Compare commits

651 Commits

patchback[bot]
743591cedc [PR #11926/c0d3464f backport][stable-12] crypttab: fix option parsing when value contains multiple equal signs (#11929)
crypttab: fix option parsing when value contains multiple equal signs (#11926)

* fix(crypttab): preserve option values containing multiple equal signs

Fixes #4963



* fix(crypttab): add changelog fragment for PR 11926



---------


(cherry picked from commit c0d3464fa7)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-28 21:27:44 +02:00
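The crypttab fix above comes down to splitting an option on the first equal sign only, so the value keeps any embedded C(=) characters. A minimal sketch of that idea; the helper name is illustrative, not the module's actual function:

```python
def parse_crypttab_option(option):
    # Split on the first '=' only, so values such as "opt=a=b=c"
    # keep any embedded '=' characters intact; a bare flag without
    # '=' has no value at all.
    if "=" not in option:
        return option, None
    name, value = option.split("=", 1)
    return name, value
```

The bug class being fixed is the unqualified `option.split("=")`, which silently discards everything after the second equal sign.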
patchback[bot]
412a348738 [PR #11918/89d82ab9 backport][stable-12] scaleway: fix NoneType error in get_resources() (#11924)
scaleway: fix NoneType error in get_resources() (#11918)

* scaleway: fix NoneType error in get_resources() when API returns empty or non-JSON response

* add changelog fragment for #11918

* Update changelogs/fragments/11361-scaleway-get-resources-none-type.yml



---------



(cherry picked from commit 89d82ab9df)

Co-authored-by: RealCharlesChia <161665317+RealCharlesChia@users.noreply.github.com>
Co-authored-by: RealCharlesChia <RealCharlesChia@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-04-27 21:13:47 +02:00
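The scaleway NoneType fix guards against an API body that is empty or not valid JSON before iterating over it. A minimal sketch of that guard, under the assumption that the caller expects a list; the function name is invented:

```python
import json

def safe_json_list(body):
    # Tolerate empty or non-JSON API payloads instead of raising;
    # the original NoneType error came from iterating over a
    # missing result.
    if not body:
        return []
    try:
        data = json.loads(body)
    except ValueError:  # includes json.JSONDecodeError
        return []
    return data if isinstance(data, list) else []
```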
Felix Fontein
020fcb251f Prepare 12.6.1. 2026-04-22 20:51:30 +02:00
patchback[bot]
1ea0904e69 [PR #11912/7db237aa backport][stable-12] Add Python 3.15 to CI (#11915)
Add Python 3.15 to CI (#11912)

Add Python 3.15 to CI.

(cherry picked from commit 7db237aaa4)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-04-22 20:50:32 +02:00
patchback[bot]
90aa3ec24d [PR #11909/d57a6672 backport][stable-12] Replace default favicon URL again (#11913)
Replace default favicon URL again (#11909)

* replace default favicon URL

* add changelog fragment for PR 11909

* fix syntax for change fragment



* use higher res favicon by default

---------



(cherry picked from commit d57a667274)

Co-authored-by: Lars Krahl <57526005+mmslkr@users.noreply.github.com>
Co-authored-by: Lars Krahl <lars.krahl@telekom.de>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-04-22 20:17:17 +02:00
patchback[bot]
2b64eb69be [PR #11901/9ef1dbb6 backport][stable-12] Move ansible-core 2.18 to EOL CI (#11904)
Move ansible-core 2.18 to EOL CI (#11901)

Move ansible-core 2.18 to EOL CI.

(cherry picked from commit 9ef1dbb6d5)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-04-20 17:03:32 +02:00
patchback[bot]
9233243c13 [PR #11898/6b5bf0a0 backport][stable-12] Fix FQCNs in examples (#11902)
Fix FQCNs in examples (#11898)

Fix FQCNs in examples.

(cherry picked from commit 6b5bf0a0bc)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-04-20 15:20:24 +02:00
Felix Fontein
6407d59323 The next release will be 12.6.1. 2026-04-20 13:56:57 +02:00
Felix Fontein
25b09239f6 Release 12.6.0. 2026-04-20 12:34:53 +02:00
patchback[bot]
524aa8bab4 [PR #11840/7ce198f0 backport][stable-12] keycloak modules: add missing author credit (#11895)
keycloak modules: add missing author credit (#11840)

keycloak modules: add missing author credit for contributions

Added myself (@koke1997) to the author list of three modules
I contributed to in PRs #11468, #11470, #11471, and #11473 but forgot
to include at the time. Also signing up as maintainer for these modules
in .github/BOTMETA.yml so the bot can route related issues and PRs.

(cherry picked from commit 7ce198f0e7)

Co-authored-by: Ivan Kokalovic <67540157+koke1997@users.noreply.github.com>
2026-04-20 12:17:16 +02:00
patchback[bot]
09bea0031d [PR #11892/3325b854 backport][stable-12] Fix typo (#11894)
Fix typo (#11892)

(cherry picked from commit 3325b854ee)

Co-authored-by: Matt Williams <matt@milliams.com>
2026-04-20 12:17:03 +02:00
patchback[bot]
be4cf3ba4d [PR #11735/3e9689b1 backport][stable-12] jira - resolve Cloud assignee email to account ID via user search (#11891)
jira - resolve Cloud assignee email to account ID via user search (#11735)

* jira - resolve Cloud assignee email to account ID via user search

When cloud=true and assignee contains '@', look up a unique user with
GET /rest/api/2/user/search and use accountId for create, transition,
and edit. Document Jira Cloud vs Server/Data Center assignee behavior.

Fixes https://github.com/ansible-collections/community.general/issues/11734

Assisted-by AI: Claude 4.6 Opus (Anthropic) via Cursor IDE



* * Using urllib.parse.quote for URL encoding
* Adding "added in version" note for assignee when resolving account_id from email



* * Added cached variable 'user_email'
* Changed comparison to handle missing email safely
* Updated error message formatting to use repr-style values



* jira - adjust assignee and cloud descriptions (#11734)



* jira - resolve user-type field emails to account IDs on Jira Cloud (#11734)

When cloud=true, user-type fields (assignee, reporter, and any listed
in the new custom_user_fields parameter) that contain '@' are resolved
from email to Jira Cloud account ID via the user search API. Strings
without '@' are assumed to be account IDs. Add custom_user_fields
parameter for user to declare additional custom fields of user type.



* jira - address PR 11735 review (docs, assignee path, errors, naming)

- Clarify O(custom_user_fields): built-ins stay automatic; list extra
  user-typed fields without implying they are only custom-field IDs.
- On Jira Cloud, set assignee from the module param as a plain string and
  let resolve_user_fields() map it to accountId (including email lookup).
- Drop redundant ``or []`` when merging O(custom_user_fields) with the
  built-in user field list.
- Use public names USER_FIELDS, resolve_user_fields, and resolve_account_id
  (no leading underscore) per reviewer preference.
- Quote field name and email in resolution errors with explicit "…" text
  instead of repr-style !r, keeping values readable in failure messages.

Refs: https://github.com/ansible-collections/community.general/pull/11735

AI-assisted: Composer 2 (Anthropic) via Cursor IDE



* Changing fail_json formatting



* formatting fixes



* jira - fixing assignee as module option in description



---------


(cherry picked from commit 3e9689b13d)

Signed-off-by: Vladimir Vasilev <vvasilev@redhat.com>
Co-authored-by: vladi-k <53343355+vladi-k@users.noreply.github.com>
2026-04-20 09:36:05 +02:00
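The jira change treats any assignee string containing C(@) as an email to resolve via the user search endpoint named in the commit, and uses `urllib.parse.quote` so characters like C(+) survive URL encoding. A sketch of the lookup-URL side only, with a hypothetical helper name:

```python
from urllib.parse import quote

def user_search_url(base_url, assignee):
    # Strings without '@' are assumed to already be account IDs and
    # need no lookup; emails are resolved via the user search API.
    # quote(..., safe="") keeps '+' and '@' from corrupting the query.
    if "@" not in assignee:
        return None
    return f"{base_url}/rest/api/2/user/search?query={quote(assignee, safe='')}"
```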
patchback[bot]
03da9164d1 [PR #11888/5b409fac backport][stable-12] filesystem - migrate LVM.get_fs_size() to use CmdRunner (#11889)
filesystem - migrate LVM.get_fs_size() to use CmdRunner (#11888)

* filesystem - migrate LVM.get_fs_size() to use CmdRunner



* filesystem - add changelog fragment for #11888



---------


(cherry picked from commit 5b409facbe)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
2026-04-20 09:35:55 +02:00
patchback[bot]
d99b778fa1 [PR #11887/9f80d89f backport][stable-12] lvol - migrate to CmdRunner (#11890)
lvol - migrate to CmdRunner (#11887)

* lvol - migrate to CmdRunner



* lvol - add changelog fragment for #11887



* adjust the changelog fragment

---------


(cherry picked from commit 9f80d89fc3)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-20 09:35:47 +02:00
patchback[bot]
fa179e6d0c [PR #11694/180da98a backport][stable-12] ipa_dnsrecord: add exclusive parameter for append-without-replace semantics (#11885)
ipa_dnsrecord: add `exclusive` parameter for append-without-replace semantics (#11694)

* ipa_dnsrecord: add solo parameter for append-without-replace semantics

Fixes #682

Adds O(solo) boolean parameter (default true, preserving current
replace behaviour) consistent with other DNS modules such as
community.general.dnsimple. When solo=false, only values not
already present in IPA are added, leaving existing values untouched.



* ipa_dnsrecord: rename solo parameter to exclusive

Rename O(solo) to O(exclusive) following reviewer feedback.
'exclusive' is the established Ansible convention for this semantic
(e.g. community.general.ini_file), while 'solo' implies single-value
DNS records.



* ipa_dnsrecord: fix changelog fragment symbol markup

Use double backticks per RST convention in changelog fragments,
not the O() macro (which is for module docstrings).



* Update plugins/modules/ipa_dnsrecord.py



* ipa_dnsrecord: implement exclusive semantics for state=absent

- exclusive=true + state=absent: remove all existing values of that
  record type and name, ignoring the specified record_value(s)
- exclusive=false + state=absent: remove only the specified values
  that actually exist in IPA, preserving all others

Also updates the exclusive parameter documentation to cover both
state=present and state=absent behaviour.



---------



(cherry picked from commit 180da98a7c)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-04-19 23:07:11 +02:00
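The four `exclusive`/`state` combinations described above reduce to simple set arithmetic over the record's existing and requested values. A sketch of that logic, with an invented function name and no IPA API calls:

```python
def plan_records(existing, wanted, state, exclusive):
    # Return (to_add, to_remove) for one record type and name.
    existing, wanted = set(existing), set(wanted)
    if state == "present":
        to_add = wanted - existing
        # exclusive=true replaces: values not in wanted go away;
        # exclusive=false only appends, leaving others untouched.
        to_remove = (existing - wanted) if exclusive else set()
    else:  # state == "absent"
        to_add = set()
        # exclusive=true wipes every value, ignoring the given ones;
        # exclusive=false removes only values that actually exist.
        to_remove = existing if exclusive else (existing & wanted)
    return to_add, to_remove
```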
patchback[bot]
dc79f4a170 [PR #11860/25b21183 backport][stable-12] udm_user, homectl - replace crypt/legacycrypt with passlib (#11884)
udm_user, homectl - replace crypt/legacycrypt with passlib (#11860)

* udm_user - replace crypt/legacycrypt with passlib

The stdlib crypt module was removed in Python 3.13. Replace the
crypt/legacycrypt import chain with passlib (already used elsewhere
in the collection) and use CryptContext.verify() for password
comparison.

Fixes #4690



* Add changelog fragment for PR 11860



* remove redundant ignore file entries

* udm_user, homectl - replace crypt/legacycrypt with _crypt module utils

Add a new _crypt module_utils that abstracts password hashing and
verification. It uses passlib when available, falling back to the
stdlib crypt or legacycrypt, and raises ImportError if none of them
can be imported. Both udm_user and homectl now use this shared
utility, fixing compatibility with Python 3.13+.

Fixes #4690



* Add BOTMETA entry for _crypt module utils



* _crypt - fix mypy errors and handle complete unavailability

Replace CryptContext = object fallback (rejected by mypy) with a
proper dummy class definition. Add has_crypt_context flag so modules
can detect when no backend is available. Update both modules to
import and check has_crypt_context instead of testing for None.



* adjustments from review

* Update plugins/modules/homectl.py



* Update plugins/modules/udm_user.py



---------



(cherry picked from commit 25b21183bb)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-04-19 22:51:20 +02:00
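The `_crypt` module utils described above chains three backends: passlib first, then the stdlib `crypt` (removed in Python 3.13), then `legacycrypt`. A sketch of that import chain and the `has_crypt_context` flag; the exact fallback order matches the commit message, the variable names here are illustrative:

```python
# Try passlib first, then stdlib crypt, then legacycrypt, and
# record which backend (if any) is usable so modules can fail
# cleanly instead of raising ImportError at import time.
try:
    from passlib.context import CryptContext  # noqa: F401
    CRYPT_BACKEND = "passlib"
except ImportError:
    try:
        import crypt  # noqa: F401  (removed in Python 3.13)
        CRYPT_BACKEND = "crypt"
    except ImportError:
        try:
            import legacycrypt  # noqa: F401
            CRYPT_BACKEND = "legacycrypt"
        except ImportError:
            CRYPT_BACKEND = None

has_crypt_context = CRYPT_BACKEND is not None
```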
patchback[bot]
748882dfa8 [PR #11879/77509be2 backport][stable-12] Replace .format() calls with f-strings across multiple plugins (#11881)
Replace .format() calls with f-strings across multiple plugins (#11879)

* Replace .format() calls with f-strings across multiple plugins



* Add changelog fragment for PR 11879



---------


(cherry picked from commit 77509be2aa)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
2026-04-19 12:49:29 +02:00
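The conversion above is mechanical; one representative before/after (the message text is invented):

```python
rc, name = 2, "sync"

# Before: positional .format() placeholders
msg_old = "Module {0} failed with rc {1}".format(name, rc)

# After: the equivalent f-string, readable left to right
msg_new = f"Module {name} failed with rc {rc}"

assert msg_old == msg_new
```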
patchback[bot]
6458abb9c1 [PR #11838/d0d213a4 backport][stable-12] homebrew_cask: fix false failure on upgrade of latest-versioned casks (#11880)
homebrew_cask: fix false failure on upgrade of latest-versioned casks (#11838)

* homebrew_cask: fix false failure on upgrade of latest-versioned casks

Use rc == 0 to determine upgrade success, consistent with _upgrade_all().
The previous check (_current_cask_is_installed() and not _current_cask_is_outdated())
was unreliable: for `latest`-versioned casks under --greedy, brew may still
list the cask as outdated after a successful upgrade, causing the task to fail
with a harmless warning from stderr as the error message.

Fixes #1647



* homebrew_cask: add changelog fragment for #11838



* Fix changelog fragment

---------


(cherry picked from commit d0d213a41d)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-19 12:49:23 +02:00
patchback[bot]
09201bf49e [PR #11878/1b0b8d5c backport][stable-12] gitlab_project_variable - use find_project() for graceful error handling (#11882)
gitlab_project_variable - use find_project() for graceful error handling (#11878)

* gitlab_project_variable - use find_project() for consistent error handling

Replace the bare projects.get() call in GitlabProjectVariables.get_project()
with find_project() from module_utils/gitlab, which all other GitLab modules
already use. This ensures a graceful fail_json (with a clear error message)
when the project is not found, rather than an unhandled GitlabGetError
propagating as a module traceback.



* Add changelog fragment for PR 11878



* minor adjustment in f-string

---------


(cherry picked from commit 1b0b8d5cc1)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 (1M context) <noreply@anthropic.com>
2026-04-19 12:49:16 +02:00
patchback[bot]
973b5be063 [PR #11824/39f4cda6 backport][stable-12] locale_gen: support locales not yet listed in /etc/locale.gen (#11883)
locale_gen: support locales not yet listed in /etc/locale.gen (#11824)

* locale_gen: support locales not yet listed in /etc/locale.gen

On systems like Gentoo where /etc/locale.gen starts with only a handful
of commented examples, set_locale_glibc() now appends missing locale
entries sourced from /usr/share/i18n/SUPPORTED instead of silently
doing nothing. Also extracts the shared locale-entry regex into a
module-level constant RE_LOCALE_ENTRY.



* locale_gen: add changelog fragment for issue 2399



---------


(cherry picked from commit 39f4cda6b5)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-19 12:49:10 +02:00
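The locale_gen change checks /etc/locale.gen for an (optionally commented) entry and, when the locale is absent entirely, appends it from /usr/share/i18n/SUPPORTED. A sketch of the lookup side; this pattern is illustrative, not the module's actual RE_LOCALE_ENTRY:

```python
import re

# Matches optionally commented entries such as "# en_US.UTF-8 UTF-8".
RE_LOCALE_ENTRY = re.compile(r"^#?\s*(\S+)\s+(\S+)\s*$")

def find_entry(lines, locale):
    # Return the matching line from /etc/locale.gen, or None when the
    # locale is not listed at all (the case the fix handles by
    # appending from SUPPORTED instead of silently doing nothing).
    for line in lines:
        m = RE_LOCALE_ENTRY.match(line)
        if m and m.group(1) == locale:
            return line
    return None
```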
patchback[bot]
449a179d8f [PR #11750/6c809dd9 backport][stable-12] pacemaker: fix race condition on resource creation (#11877)
pacemaker: fix race condition on resource creation (#11750)

* remove pacemaker wait arg and fix race condition

* fix up pacemaker resource and stonith polling

* add changelog for pacemaker timeout bug

* remove env from test case and fix changelog file name

* Update changelogs/fragments/11750-pacemaker-wait-race-condition.yml



---------


(cherry picked from commit 6c809dd9db)

Co-authored-by: munchtoast <45038532+munchtoast@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-04-18 22:55:56 +02:00
patchback[bot]
27ca6be10a [PR #11813/edf8f249 backport][stable-12] parted: add unit_preserve_case option to fix unit case in return value (#11875)
parted: add unit_preserve_case option to fix unit case in return value (#11813)

* parted: add unit_preserve_case option to fix unit case in return value

Adds O(unit_preserve_case) feature flag (bool, default None) to control
the case of the ``unit`` field in the module return value.

Previously the unit was always lowercased (e.g. ``kib``), making it
impossible to feed ``disk.unit`` back as the ``unit`` parameter without
a validation error. With O(unit_preserve_case=true) the unit is returned
in its original mixed case (e.g. ``KiB``), matching the accepted input
values.

The default (None) emits a deprecation notice; the default will become
V(true) in community.general 14.0.0.

Fixes #1860



* parted: add changelog fragment for PR #11813



* adjustments from review

* Comment 15.0.0 deprecation in option description.

* parted: fix unit test calls to parse_partition_info after signature change



* parted: fix unit_preserve_case - parted outputs lowercase units in machine mode

Parted's machine-parseable output always uses lowercase unit suffixes
(e.g. ``kib``, ``mib``) regardless of what was passed to the ``unit``
parameter. Removing the explicit ``.lower()`` call was therefore not
enough to preserve case.

Add a ``canonical_unit()`` helper that maps a unit string to its canonical
mixed-case form using ``parted_units`` as the reference, and use it
instead of a bare identity when ``unit_preserve_case=true``.

Also fix a yamllint violation in the DOCUMENTATION block (missing space
after ``#`` in inline comments).



* Update plugins/modules/parted.py



* Update plugins/modules/parted.py



---------



(cherry picked from commit edf8f24959)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-04-18 22:55:45 +02:00
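The `canonical_unit()` helper described above maps parted's always-lowercase machine-mode units back to the mixed-case spellings the module accepts as input. A sketch of the idea; the unit table here is an illustrative subset, not the module's actual `parted_units`:

```python
# Illustrative subset of parted's accepted unit names.
PARTED_UNITS = ["B", "KiB", "MiB", "GiB", "TiB", "kB", "MB", "GB", "TB"]

_CANONICAL = {u.lower(): u for u in PARTED_UNITS}

def canonical_unit(unit):
    # parted's machine-parseable output lowercases units ("kib"),
    # so map back to the canonical mixed case ("KiB"); unknown
    # strings pass through unchanged.
    return _CANONICAL.get(unit.lower(), unit)
```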
patchback[bot]
3867300eca [PR #11839/afe9de75 backport][stable-12] homebrew_service: remove redundant code (#11876)
homebrew_service: remove redundant code (#11839)

* homebrew_service: remove redundant code

* homebrew_services: add changelog fragment for #11839



---------


(cherry picked from commit afe9de7562)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-18 22:55:25 +02:00
patchback[bot]
ae05131a54 [PR #11702/314863e3 backport][stable-12] xenserver_guest: changed cdrom handling for userdevice != 3, fixes #11624 (#11872)
xenserver_guest: changed cdrom handling for userdevice != 3, fixes #11624 (#11702)

* xenserver_guest: changed cdrom handling for userdevice != 3, fixes #11624

  - CD-ROM handling code has been moved before disk handling code. This more
    closely mimics XenCenter/XCP-ng Center behavior. CD-ROM device, if added,
    will now grab position 3 before any disk grabs it.
  - Position 3 is now skipped when adding disks to leave it reserved for
    CD-ROM device. If any disk ends up occupying position 3 and CD-ROM
    device ends up pushed to position above 3, booting from ISO is not
    possible (#11624).

* Added changelog fragment for #11702

* Added missing issue and PR URLs to changelog fragment for #11702

* Fixed changelog fragment for #11702

(cherry picked from commit 314863e3a7)

Co-authored-by: Bojan Vitnik <bvitnik@yahoo.com>
2026-04-17 18:55:33 +02:00
patchback[bot]
f67003cf3a [PR #11835/3416efa5 backport][stable-12] lvg - migrate to CmdRunner (#11858)
lvg - migrate to CmdRunner (#11835)


(cherry picked from commit 3416efa5bf)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-17 18:33:10 +02:00
patchback[bot]
2bd64a891c [PR #11817/175808d9 backport][stable-12] consul_kv: add ca_path option for custom CA certificate (#11852)
consul_kv: add ca_path option for custom CA certificate (#11817)

* consul_kv: add ca_path option for custom CA certificate

Adds ca_path parameter to both the consul_kv module and consul_kv lookup
plugin, allowing users to specify a CA bundle for HTTPS connections instead
of being limited to toggling certificate validation on/off.



* consul_kv: add changelog fragment for PR #11817



* consul_kv: address review comments from felixfontein

- Fix verify logic: ca_path is ignored when validate_certs=false
- Improve validate_certs description to nudge users toward ca_path



---------


(cherry picked from commit 175808d997)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-17 18:33:02 +02:00
patchback[bot]
6e226f4588 [PR #11812/e2a7dc46 backport][stable-12] sefcontext: flush in-process matchpathcon cache (#11854)
sefcontext: flush in-process matchpathcon cache (#11812)

* fix sefcontext: flush in-process matchpathcon cache after changes

Fixes https://github.com/ansible-collections/community.general/issues/888



* update changelog fragment with PR number and URL



---------


(cherry picked from commit e2a7dc467d)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-17 18:32:53 +02:00
patchback[bot]
d82bf01128 [PR #11849/74c096b0 backport][stable-12] homebrew_cask: handle placeholder version from brew --version (#11855)
homebrew_cask: handle placeholder version from brew --version (#11849)

* homebrew_cask: handle placeholder version from brew --version

When brew is run as the wrong user, git repositories may be owned by
a different user, causing brew --version to output a placeholder like
"Homebrew >= 4.3.0 (shallow or no git repository)" instead of the real
version. The parsed version would then be lower than the 2.6.0 threshold,
causing _brew_cask_command_is_deprecated() to return False and the module
to use the disabled "brew cask" command syntax.

Detect the ">=" prefix in the parsed version and treat it as a modern
installation.

Fixes #4708



* homebrew_cask: add changelog fragment for #11849



---------


(cherry picked from commit 74c096b00c)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-17 18:32:45 +02:00
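The placeholder detection above can be sketched as a version check that treats a C(>=)-prefixed version string as a modern installation rather than parsing it as a bogus low version. The 2.6.0 threshold comes from the commit message; the function name and regex are illustrative:

```python
import re

def brew_version_is_modern(version_output, threshold=(2, 6, 0)):
    # "brew --version" may print a placeholder such as
    # "Homebrew >= 4.3.0 (shallow or no git repository)" when the
    # git repository is unreadable by the current user.
    m = re.search(r"Homebrew\s+(>=\s*)?(\d+)\.(\d+)\.(\d+)", version_output)
    if not m:
        return False
    if m.group(1):  # ">=" placeholder: assume a modern install
        return True
    return tuple(int(g) for g in m.groups()[1:]) >= threshold
```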
patchback[bot]
2873d439c3 [PR #11764/e9110811 backport][stable-12] logrotate: fix parameter and config file validation and more (#11856)
logrotate: fix parameter and config file validation and more (#11764)

* fix(logrotate): add missing defaults and parameter validation declarations

- Add default="present" to state parameter
- Add default="/etc/logrotate.d" to config_dir parameter
- Add required_by declarations for shred and compression parameters

* fix(logrotate): fix runtime validation bugs, remove duplicate checks

- Fix shred_cycles TypeError when value is None
- Fix enabled=None handling in get_config_path
- Remove duplicate runtime mutually_exclusive checks
- Add runtime boolean truthiness checks
- Add 'create' parameter format validation
- Remove stale test method

* fix(logrotate): restructure file operations, validate before write

- Write content to tmpdir temp file, validate, then atomic move to destination.
- Wrap all os.remove() calls in try/except with fail_json on error
- Wrap all module.atomic_move() calls in try/except with fail_json on error
- Also add self.mock_module.tmpdir = self.test_dir to test setUp for new code path

* docs(logrotate): update DOCUMENTATION block

- Add 'default: present' to state option
- Add 'default: /etc/logrotate.d' to config_dir option

* feat(logrotate): add optional backup parameter

* chore: add logrotate fixes changelog fragment

* chore(changelog/logrotate): use present tense singular

* fix(logrotate): handle trailing spaces in create param



* refactor(logrotate): remove redundant checks

These are already handled by `required_if` statements in the module spec

* refactor(logrotate): use tempfile to create temporary file

* refactor(logrotate): remove redundant `bool()` casts on `target_enabled`

`target_enabled` is guaranteed to be bool by this point. It's either the module param (typed bool) or falls back to `current_enabled` (also bool). The `bool()` wraps are no-ops.

* refactor(logrotate): remove unused `self.config_file` attribute

* refactor(logrotate): remove dead `any_state` parameter from `read_existing_config`

* fix(logrotate): raise error instead of falling through on enabled-state rename failures

* refactor(logrotate): tighten `get_config_path` sig to bool

`None` callers are removed now so this is safe

* test(logrotate): remove stale open mock assertion after tempfile refactor

* style(logrotate): format file

* chore(logrotate): add missing `version_added` attribute



* fix(logrotate): clean up temp file



* fix(logrotate): remove redundant temp file cleanup



* refactor(logrotate): Use dict subscript to access required backup param



* fix(logrotate): fix: only remove old config file when path differs from target



* fix(logrotate): update logrotate_bin type hint to str

* feat(logrotate): add backup file handling when removing old config

* style(logrotate): format file

* test(logrotate): add missing backup default to `_setup_module_params`

* test(logrotate): fix incorrect `os.remove` assertion in update test

* refactor(logrotate): remove unnecessary `to_native()` call



* refactor(logrotate): replace str quotes with !r



* fix(logrotate): change backup default back to true

* fix(logrotate): raise error when `shred_cycles` is set with `shred=false`

* docs(logrotate): clarify `shred_cycles` behaviour

* fix(logrotate): remove to_native calls for exception messages

* docs(logrotate): improve `config_dir` param description

* refactor(logrotate): simplify backup file assignment logic

* style(logrotate): format file

* docs(logrotate): improve config_map description

---------



(cherry picked from commit e911081102)

Co-authored-by: tigattack <10629864+tigattack@users.noreply.github.com>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-04-17 18:32:36 +02:00
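The write-then-validate-then-move restructuring above follows a common pattern: write the generated config to a temp file, run validation against it, and only then atomically move it into place so a broken file never lands in the target directory. A self-contained sketch with an invented function name (the module uses `module.atomic_move()` rather than `os.replace`):

```python
import os
import tempfile

def write_validated(content, dest, validate):
    # Write to a temp file beside the destination, validate, then
    # atomically replace; clean the temp file up on any failure.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(dest) or ".")
    try:
        with os.fdopen(fd, "w") as f:
            f.write(content)
        if not validate(tmp):
            raise ValueError("generated config failed validation")
        os.replace(tmp, dest)  # atomic on POSIX
    finally:
        if os.path.exists(tmp):
            os.remove(tmp)
```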
patchback[bot]
956fc075ef [PR #11837/87ecfa34 backport][stable-12] iso_extract: retry umount on busy filesystem before cleanup (#11857)
iso_extract: retry umount on busy filesystem before cleanup (#11837)

* iso_extract: retry umount on busy filesystem before cleanup

Fixes #5333



* iso_extract: add changelog fragment for #11837



* make chglog more concise

---------


(cherry picked from commit 87ecfa3432)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-17 18:32:27 +02:00
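The retry described above handles the transient EBUSY that can follow extraction. A generic sketch of the loop, assuming a caller-supplied callable that returns the umount command's exit code (a hypothetical interface, not the module's actual signature):

```python
import time

def umount_with_retry(run_umount, attempts=3, delay=1.0):
    # A busy filesystem right after extraction is often transient;
    # retry a few times before failing and letting the caller
    # handle cleanup.
    for _ in range(attempts):
        if run_umount() == 0:
            return True
        time.sleep(delay)
    return False
```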
patchback[bot]
7b82e694a2 [PR #11859/dad84dd3 backport][stable-12] udm_user - fix alias-to-canonical param name mismatch (#11863)
udm_user - fix alias-to-canonical param name mismatch (#11859)

* udm_user - fix alias-to-canonical param name mismatch

The loop that maps UDM object properties to module params iterated
over UDM keys (camelCase, e.g. displayName, primaryGroup) and looked
them up directly in module.params, which is keyed by canonical names
(snake_case, e.g. display_name, primary_group). This caused all
aliased params to be silently ignored.

Build an alias-to-canonical mapping from argument_spec and use it
to resolve UDM keys to the correct module.params entries.

Also fix the direct module.params["displayName"] access which raised
KeyError when the user did not explicitly use the alias form.

Fixes #2950
Fixes #3691



* Add changelog fragment for PR 11859



---------


(cherry picked from commit dad84dd36d)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-17 18:32:17 +02:00
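The alias-to-canonical mapping above can be built directly from an `argument_spec`-shaped dict, so a camelCase UDM key like C(displayName) resolves to the snake_case `module.params` entry. A sketch with an illustrative spec:

```python
def build_alias_map(argument_spec):
    # Map every alias (e.g. "displayName") to its canonical
    # parameter name (e.g. "display_name"); canonical names map
    # to themselves so lookups never KeyError on either form.
    alias_map = {}
    for name, spec in argument_spec.items():
        alias_map[name] = name
        for alias in spec.get("aliases", []):
            alias_map[alias] = name
    return alias_map
```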
patchback[bot]
119623952d [PR #11848/c4ed3467 backport][stable-12] homebrew_tap: fix None in command, redundant brew tap calls, format strings, and drop no-op locale vars (#11865)
homebrew_tap: fix None in command, redundant brew tap calls, format strings, and drop no-op locale vars (#11848)

* homebrew_tap: fix None in command list, redundant brew tap calls, and bad format strings

- Fix None being injected into the run_command list when url is not
  provided to add_tap (filter with [opt for opt in [...] if opt])
- Reduce redundant `brew tap` calls: add_taps and remove_taps now
  fetch the tap list once upfront and pass it to the per-tap functions;
  already_tapped accepts an optional pre-fetched list to avoid re-running
  brew for every tap in a batch
- Fix mixed f-string/%-formatting in error messages in add_taps and
  remove_taps, replaced with plain f-strings



* homebrew_tap: simplify command construction in add_tap

Replace the opaque list comprehension filter with an explicit conditional
append — only url is ever optional, so testing the known-present items
was misleading.



* homebrew_tap: remove unnecessary locale env vars

Homebrew has no i18n/l10n support — all output is hardcoded English.
LANGUAGE=C and LC_ALL=C have no effect on brew output.



* homebrew_tap: add changelog fragment for #11848



* remove homebrew_tap from PR #11783 changelog - change reverted here

---------


(cherry picked from commit c4ed3467b6)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-17 18:32:10 +02:00
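The "explicit conditional append" from the second commit message above, versus filtering `None` out of the whole list, looks like this (brew path and function name are illustrative):

```python
def tap_cmd(brew_path, tap, url=None):
    # Only `url` is ever optional, so build the known-present part
    # first and append conditionally, instead of the opaque
    # [opt for opt in [...] if opt] filter over every element.
    cmd = [brew_path, "tap", tap]
    if url is not None:
        cmd.append(url)
    return cmd
```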
patchback[bot]
644d362228 [PR #11851/1db3d4f4 backport][stable-12] gitlab_project_members: fail when multiple projects match by name (#11864)
gitlab_project_members: fail when multiple projects match by name (#11851)

* gitlab_project_members: fail when multiple projects match by name

When the project parameter is a bare name (not a full path), and the
search returns more than one match, the module now fails with a clear
error asking the user to provide the full path (group/project) to
disambiguate, instead of silently operating on the first result.

Fixes #2767



* gitlab_project_members: improve code formatting



* gitlab_project_members: add changelog fragment for #11851



---------


(cherry picked from commit 1db3d4f441)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-17 18:32:02 +02:00
patchback[bot]
8def5bf46e [PR #11850/f8869af6 backport][stable-12] homebrew_cask: fix sudo_password failing with special characters (#11867)
homebrew_cask: fix sudo_password failing with special characters (#11850)

* homebrew_cask: fix sudo_password with special characters in password

The SUDO_ASKPASS script embedded the password inside single quotes, which
breaks shell parsing whenever the password contains a single quote. Use a
quoted heredoc (cat <<'SUDO_PASS') instead, which treats the content
completely literally regardless of special characters.

Also replace .file.close() with .flush() (correct semantics — flushes
the write buffer without leaving the NamedTemporaryFile in a half-closed
state) and remove the redundant add_cleanup_file() call (the context
manager already deletes the file on exit).

Fixes #4957



* homebrew_cask: add changelog fragment for #11850



* homebrew_cask: fix sudo_password example and clarify ansible_become_password



* homebrew_cask: use shlex.quote() for sudo_password instead of heredoc

shlex.quote() is the standard Python approach for shell-safe quoting
and handles all special characters without the edge cases of heredocs.



---------


(cherry picked from commit f8869af65f)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-17 18:31:52 +02:00
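The final approach above, `shlex.quote()` in the SUDO_ASKPASS helper, can be sketched as follows; the script body here is a minimal stand-in for whatever the module actually writes:

```python
import shlex

def askpass_script(sudo_password):
    # shlex.quote() makes the echoed password shell-safe even when
    # it contains single quotes, spaces, or '$' - the cases that
    # broke the earlier hand-rolled single-quoted form.
    return f"#!/bin/sh\n\necho {shlex.quote(sudo_password)}\n"
```

Round-tripping the script line through `shlex.split()` confirms the password survives intact.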
patchback[bot]
057fd16cc0 [PR #11861/076bc4e0 backport][stable-12] etcd3 lookup - improve HTTPS connection handling and docs (#11871)
etcd3 lookup - improve HTTPS connection handling and docs (#11861)

* etcd3 lookup - improve HTTPS connection handling and documentation

Improve user experience when connecting to HTTPS etcd3 endpoints:
- Strip URL scheme from host option when present, with a warning
- Warn when HTTPS endpoint is specified but ca_cert is not provided
- Document that ca_cert is required to enable TLS
- Add HTTPS connection example
- Fix minor doc markup issue in notes section

Fixes #1664



* Add changelog fragment for PR 11861



---------


(cherry picked from commit 076bc4e03b)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
2026-04-17 18:31:39 +02:00
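Stripping a scheme from the C(host) option, as the etcd3 change does, is a small normalization step; a sketch (the warning emission is elided, and the function name is invented):

```python
from urllib.parse import urlparse

def normalize_host(host):
    # Users sometimes pass "https://etcd.example.com" as the host
    # option; strip the scheme and remember whether HTTPS was
    # requested so TLS options (ca_cert) can be checked.
    if "://" in host:
        parsed = urlparse(host)
        return parsed.hostname, parsed.scheme == "https"
    return host, False
```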
patchback[bot]
3a56f19656 [PR #11862/342a76d5 backport][stable-12] Remove unstable CI target (#11870)
Remove unstable CI target (#11862)

Remove unstable CI target.

(cherry picked from commit 342a76d5dd)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-04-17 18:31:21 +02:00
patchback[bot]
729eb996e8 [PR #11836/ef656cb9 backport][stable-12] CI: Replace Fedora 43 with 44 for devel (#11846)
CI: Replace Fedora 43 with 44 for devel (#11836)

* Replace Fedora 43 with 44 for devel in CI.

* Adjust tests.

* Adjust flatpak module to Fedora 44.

(cherry picked from commit ef656cb9b6)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-04-16 22:00:37 +02:00
patchback[bot]
3e721f9572 [PR #11842/7884a3f2 backport][stable-12] CI: Temporarily skip failing callback unit tests for ansible-core 2.21+ (#11844)
CI: Temporarily skip failing callback unit tests for ansible-core 2.21+ (#11842)

Temporarily skip failing unit tests.

(cherry picked from commit 7884a3f2a2)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-04-16 21:39:55 +02:00
patchback[bot]
4d30704615 [PR #11826/7dcd3c1c backport][stable-12] lxd_container: document that config values must be strings (#11829)
lxd_container: document that config values must be strings (#11826)

Fixes #8307


(cherry picked from commit 7dcd3c1c45)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-15 22:01:36 +02:00
patchback[bot]
308a5d7e06 [PR #11823/71723268 backport][stable-12] lvol: fix LVM version regex to handle date formats without dashes (#11831)
lvol: fix LVM version regex to handle date formats without dashes (#11823)

* lvol: fix LVM version regex to handle date formats without dashes

Fixes #5445
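The parsing problem can be illustrated with a sketch (the pattern below is illustrative, not the module's exact regex): `lvm version` output appends a build date that some distributions print without dashes, so the version match must not depend on the date's format.

```python
import re


def parse_lvm_version(line):
    # Illustrative pattern: anchor on the dotted version number itself, so a
    # trailing build date matches whether it reads "(2016-11-30)" or "(20161130)".
    m = re.search(r"LVM version:\s+(\d+)\.(\d+)\.(\d+)", line)
    if not m:
        return None
    return tuple(int(g) for g in m.groups())
```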



* lvol: add changelog fragment for issue 5445



---------


(cherry picked from commit 7172326868)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-15 22:01:28 +02:00
patchback[bot]
f78a44c6a3 [PR #11825/d1448b76 backport][stable-12] iso_extract: strip leading path separator from file entries (#11832)
iso_extract: strip leading path separator from file entries (#11825)

* iso_extract: strip leading path separator from file entries

Fixes #5283
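The fix reduces to normalizing entries before joining paths; a sketch (helper name illustrative):

```python
import os.path


def normalize_iso_entry(name):
    # File entries listed from an ISO can carry a leading "/" (for example
    # "/EFI/BOOT/BOOTX64.EFI"); os.path.join() treats such a path as absolute
    # and discards the destination directory, so strip the separator first.
    return name.lstrip("/")
```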



* iso_extract: add changelog fragment for issue 5283



---------


(cherry picked from commit d1448b76c1)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-15 22:01:19 +02:00
patchback[bot]
1b4e58bbbf [PR #11811/ff5c34c4 backport][stable-12] lvm_pv - use CmdRunner (#11833)
lvm_pv - use CmdRunner (#11811)

* lvm_pv - migrate to CmdRunner using shared runners from module_utils/_lvm



* lvm_pv - add changelog fragment for #11811



* Update changelogs/fragments/11811-lvm_pv-use-cmdrunner.yml



---------



(cherry picked from commit ff5c34c4a7)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-04-15 22:01:11 +02:00
patchback[bot]
705ffc564d [PR #11771/df252e5f backport][stable-12] incus, machinectl, run0 - fix become over pty connections (#11827)
incus, machinectl, run0 - fix become over pty connections (#11771)

* incus, machinectl, run0 - fix become over pty connections

Four small fixes across three plugins, all discovered while trying to
use community.general.machinectl (and later community.general.run0)
as become methods over the community.general.incus connection.

Core bug: machinectl and run0 both set require_tty = True, but the
incus connection plugin was ignoring that hint and invoking
'incus exec' without -t. Honor require_tty by passing -t, mirroring
what the OpenSSH plugin does with -tt.

Once the pty is in place, both become plugins emit terminal control
sequences (window-title OSC, ANSI reset) around the child command
that land in captured stdout alongside the module JSON and trip the
result parser with "Module invocation had junk after the JSON data".
Suppress that decoration at the source by prefixing the constructed
shell command with SYSTEMD_COLORS=0. TERM=dumb would work too but
has a wider blast radius (it also affects interactive tools inside
the become-user session); SYSTEMD_COLORS is the documented
systemd-scoped knob.

run0 was also missing pipelining = False. When run0 is used over a
connection that honors require_tty, Ansible's pipelining sends the
module source on stdin to remote python3, which cannot be forwarded
cleanly through the pty chain and hangs indefinitely. Disable
pipelining the same way community.general.machinectl already does.

Also add tests/unit/plugins/become/test_machinectl.py mirroring the
existing test_run0.py. machinectl had no unit test coverage before,
which is why CI did not catch the SYSTEMD_COLORS=0 prefix change
when the equivalent run0 change broke test_run0_basic/test_run0_flags.
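The two command-construction changes can be sketched together (argument shapes illustrative, not the plugin's exact code):

```python
def build_incus_exec(container, cmd, require_tty):
    # Honor the become plugin's require_tty hint with -t (mirroring the
    # OpenSSH plugin's -tt), and prefix the constructed shell command with
    # SYSTEMD_COLORS=0 so machinectl/run0 emit no terminal decoration into
    # captured stdout.
    args = ["incus", "exec"]
    if require_tty:
        args.append("-t")
    args += [container, "--", "/bin/sh", "-c", "SYSTEMD_COLORS=0 " + cmd]
    return args
```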



* Update changelogs/fragments/11771-incus-machinectl-run0-become-pty.yml



---------



(cherry picked from commit df252e5fab)

Co-authored-by: Martin Schürrer <martin@schuerrer.org>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-04-15 22:01:06 +02:00
patchback[bot]
86dc3c8816 [PR #11815/78d004d9 backport][stable-12] lvg: clarify desired-state semantics of pvs parameter in docs (#11821)
lvg: clarify desired-state semantics of pvs parameter in docs (#11815)

lvg: doc adjustment
(cherry picked from commit 78d004d96e)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-04-14 22:03:13 +02:00
patchback[bot]
1b1746896d [PR #11772/24ca7965 backport][stable-12] dconf: add dbus-broker support by improving D-Bus session discovery (#11820)
dconf: add dbus-broker support by improving D-Bus session discovery (#11772)

* dconf: add dbus-broker support by improving D-Bus session discovery

Extend DBusWrapper._get_existing_dbus_session() to check:
1. DBUS_SESSION_BUS_ADDRESS in the current process environment
2. /run/user/<uid>/bus (canonical socket for systemd and dbus-broker)
3. Process scan (legacy fallback, as before)

Also add _validate_address() to support both dbus-send and busctl,
making the module work on systems using dbus-broker (e.g. Fedora Silverblue)
where no process exposes DBUS_SESSION_BUS_ADDRESS in its environment.

Fixes: https://github.com/ansible-collections/community.general/issues/495
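The discovery order can be sketched as follows (function name illustrative; the process scan remains a legacy fallback in the real module):

```python
import os


def candidate_bus_addresses():
    # Check the current environment first, then the canonical per-user
    # socket used by both systemd's dbus-daemon setup and dbus-broker.
    candidates = []
    env_addr = os.environ.get("DBUS_SESSION_BUS_ADDRESS")
    if env_addr:
        candidates.append(env_addr)
    candidates.append("unix:path=/run/user/%d/bus" % os.getuid())
    return candidates
```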



* dconf: add changelog fragment for dbus-broker support



* dconf: restore dbus validator requirement and example usage

Restore fail_json when neither dbus-send nor busctl is available,
preserving the original hard requirement for a validator binary.
Restore the example invocation in DBusWrapper docstring.



---------


(cherry picked from commit 24ca79658a)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-14 22:02:54 +02:00
patchback[bot]
32c4bbea30 [PR #11748/972bed66 backport][stable-12] flatpak: add from_url parameter, deprecate URLs in name (#11814)
flatpak: add from_url parameter, deprecate URLs in name (#11748)

* flatpak: add from_url parameter, deprecate URLs in name

Adds a new `from_url` parameter for installing flatpaks from a
.flatpakref URL, using `flatpak install --from <url>`. The `name`
parameter then carries the reverse DNS application ID, enabling
reliable idempotency checks.

Passing URLs directly in `name` is now deprecated and will be
removed in community.general 14.0.0.

Fixes #4000
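The resulting command construction can be sketched as (helper name illustrative):

```python
def flatpak_install_args(name, from_url=None):
    # With from_url set, install via "flatpak install --from <url>"; the
    # reverse-DNS app ID in "name" is then used for the idempotency check.
    # Otherwise install by name as before.
    if from_url:
        return ["flatpak", "install", "--from", from_url]
    return ["flatpak", "install", name]
```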



* flatpak: add changelog fragment for PR #11748



* flatpak: remove deprecation, adjust docs tone



* flatpak: add integration tests for from_url parameter



---------


(cherry picked from commit 972bed66f4)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-13 21:30:31 +02:00
patchback[bot]
e278490fe7 [PR #11746/61060532 backport][stable-12] feat: use CmdRunner for LVM commands (#11794)
feat: use CmdRunner for LVM commands (#11746)

* feat: use CmdRunner for LVM commands

* Update plugins/module_utils/lvm.py



* rename module util to _lvm

---------


(cherry picked from commit 61060532f9)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-04-12 22:29:22 +02:00
patchback[bot]
d01187d03b [PR #11768/b40608a3 backport][stable-12] Ensure standard locale in run_command (group5-batch1) (#11795)
Ensure standard locale in run_command (group5-batch1) (#11768)

* ensure standard locale in run_command (group5-batch1)

Adds ``LANGUAGE=C`` and ``LC_ALL=C`` to ``run_command()`` calls in modules
that parse command output, to prevent locale-dependent parsing failures on
non-C-locale systems.

Modules updated: apache2_module, composer, facter_facts, known_hosts module
utils, lvg_rename, macports, modprobe, monit, open_iscsi, pacman_key,
rhsm_release, rpm_ostree_pkg, sysupgrade.
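The pattern applied across these modules can be sketched outside Ansible (in the real modules the overrides are set via the ``run_command_environ_update`` attribute of ``AnsibleModule``):

```python
import os
import subprocess

# Overrides applied to every spawned command so its output is parsed in a
# predictable C locale regardless of the host's configuration.
LOCALE_ENV = {"LANGUAGE": "C", "LC_ALL": "C"}


def run_with_c_locale(argv):
    env = dict(os.environ, **LOCALE_ENV)
    return subprocess.run(argv, env=env, capture_output=True, text=True)
```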



* add changelog fragment for group5-batch1



* Remove lvg_rename from locale fix — superseded by PR #11746

PR #11746 (feat: use CmdRunner for LVM commands) takes priority and
will handle lvg_rename.py via CmdRunner refactor. Removing our
run_command_environ_update change to avoid conflict.



---------


(cherry picked from commit b40608a39d)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-12 22:29:14 +02:00
patchback[bot]
3ef5b36066 [PR #11773/8fbb43e6 backport][stable-12] Ensure standard locale in run_command (group5-batch2) (#11806)
Ensure standard locale in run_command (group5-batch2) (#11773)

* ensure standard locale in run_command (group5-batch2)

Adds ``LANGUAGE=C`` and ``LC_ALL=C`` to ``run_command()`` calls in modules
that parse command output, to prevent locale-dependent parsing failures on
non-C-locale systems.

Modules updated: cronvar, dnf_versionlock, dpkg_divert, flatpak_remote, hg.



* add changelog fragment for group5-batch2



---------


(cherry picked from commit 8fbb43e660)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-12 22:29:04 +02:00
patchback[bot]
150f63b36b [PR #11779/d6909578 backport][stable-12] Ensure standard locale in run_command (group5-batch8) (#11804)
Ensure standard locale in run_command (group5-batch8) (#11779)

* Fix locale env vars in run_command() calls for group5 batch8 modules

Set LANGUAGE=C and LC_ALL=C via run_command_environ_update to ensure
locale-independent output parsing in lxc_container, ip_netns,
and capabilities.



* Add changelog fragment for PR #11779



---------


(cherry picked from commit d6909578b9)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-12 22:28:52 +02:00
patchback[bot]
582807ea3a [PR #11780/5c6a5999 backport][stable-12] Ensure standard locale in run_command (group5-batch9) (#11803)
Ensure standard locale in run_command (group5-batch9) (#11780)

* Fix locale env vars in run_command() calls for group5 batch9 modules

Set LANGUAGE=C and LC_ALL=C via run_command_environ_update to ensure
locale-independent output parsing in beadm, pkg5, pkg5_publisher,
and swdepot.



* Add changelog fragment for PR #11780



---------


(cherry picked from commit 5c6a599940)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-12 22:28:44 +02:00
patchback[bot]
bdcd10ca2c [PR #11781/5f0a9bba backport][stable-12] Ensure standard locale in run_command (group5-batch10) (#11802)
Ensure standard locale in run_command (group5-batch10) (#11781)

* Fix locale env vars in run_command() calls for group5 batch10 modules

Set LANGUAGE=C and LC_ALL=C via run_command_environ_update to ensure
locale-independent output parsing in imgadm, smartos_image_info,
syspatch, portage, portinstall, xbps, and lbu.



* Add changelog fragment for PR #11781



---------


(cherry picked from commit 5f0a9bba01)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-12 22:28:35 +02:00
patchback[bot]
42e88c56b6 [PR #11782/fe9e7284 backport][stable-12] Ensure standard locale in run_command (group5-batch11) (#11801)
Ensure standard locale in run_command (group5-batch11) (#11782)

* Fix locale env vars in run_command() calls for group5 batch11 modules

Set LANGUAGE=C and LC_ALL=C via run_command_environ_update to ensure
locale-independent output parsing in apt_repo, easy_install, pear,
and zypper_repository_info.



* Add changelog fragment for PR #11782



---------


(cherry picked from commit fe9e728401)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-12 22:28:26 +02:00
patchback[bot]
d858cfa8a8 [PR #11783/9cadc947 backport][stable-12] Ensure standard locale in run_command (group5-batch12) (#11800)
Ensure standard locale in run_command (group5-batch12) (#11783)

* Fix locale env vars in run_command() calls for group5 batch12 modules

Set LANGUAGE=C and LC_ALL=C via run_command_environ_update to ensure
locale-independent output parsing in bower, bundler, homebrew_tap,
and kibana_plugin.



* Add changelog fragment for PR #11783



---------


(cherry picked from commit 9cadc94793)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-12 22:28:19 +02:00
patchback[bot]
15d5670065 [PR #11784/3f7ae199 backport][stable-12] Ensure standard locale in run_command (group5-batch13) (#11799)
Ensure standard locale in run_command (group5-batch13) (#11784)

* Fix locale env vars in run_command() calls for group5 batch13 modules

Set LANGUAGE=C and LC_ALL=C via run_command_environ_update to ensure
locale-independent output parsing in awall, openwrt_init, and
pip_package_info.



* Add changelog fragment for PR #11784



---------


(cherry picked from commit 3f7ae1999e)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-12 22:28:12 +02:00
patchback[bot]
e5f55149c6 [PR #11785/269a5ed8 backport][stable-12] Ensure standard locale in run_command (group5-batch14) (#11797)
Ensure standard locale in run_command (group5-batch14) (#11785)

* Fix locale env vars in run_command() calls for group5 batch14 modules

Set LANGUAGE=C and LC_ALL=C via run_command_environ_update to ensure
locale-independent output parsing in bzr, lldp, and ohai.



* Add changelog fragment for PR #11785



---------


(cherry picked from commit 269a5ed85e)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-12 22:28:00 +02:00
patchback[bot]
50d22c9f70 [PR #11786/37653bc7 backport][stable-12] Ensure standard locale in run_command (group5-batch15) (#11796)
Ensure standard locale in run_command (group5-batch15) (#11786)

* Fix locale env vars in run_command() calls for group5 batch15 modules

Set LANGUAGE=C and LC_ALL=C via run_command_environ_update to ensure
locale-independent output parsing in keyring_info, onepassword_info,
and riak.



* Add changelog fragment for PR #11786



---------


(cherry picked from commit 37653bc7f9)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-12 22:27:47 +02:00
patchback[bot]
671ce86565 [PR #11787/95e2b771 backport][stable-12] Ensure standard locale in run_command (group5-batch16) (#11798)
Ensure standard locale in run_command (group5-batch16) (#11787)

* Fix locale env vars in run_command() calls for group5 batch16 (btrfs module_utils)



* Add changelog fragment for PR #11787



---------


(cherry picked from commit 95e2b7716a)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-12 22:27:35 +02:00
patchback[bot]
0fc99ae2d8 [PR #11774/6d5644ac backport][stable-12] Ensure standard locale in run_command (group5-batch3) (#11807)
Ensure standard locale in run_command (group5-batch3) (#11774)

* Fix locale env vars in run_command() calls for group5 batch3 modules

Set LANGUAGE=C and LC_ALL=C via run_command_environ_update to ensure
locale-independent output parsing in homectl, java_cert, keyring,
launchd, and listen_ports_facts.



* Add changelog fragment for PR #11774



---------


(cherry picked from commit 6d5644ac34)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-12 22:27:28 +02:00
patchback[bot]
dfa9f77b7a [PR #11775/7c52f1c4 backport][stable-12] Ensure standard locale in run_command (group5-batch4) (#11808)
Ensure standard locale in run_command (group5-batch4) (#11775)

* Fix locale env vars in run_command() calls for group5 batch4 modules

Set LANGUAGE=C and LC_ALL=C via run_command_environ_update to ensure
locale-independent output parsing in logstash_plugin, lvg, mas,
osx_defaults, and pkgutil.



* Add changelog fragment for PR #11775



---------


(cherry picked from commit 7c52f1c41d)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-12 22:27:16 +02:00
patchback[bot]
cfa712f30e [PR #11776/e45e6cbb backport][stable-12] Ensure standard locale in run_command (group5-batch5) (#11809)
Ensure standard locale in run_command (group5-batch5) (#11776)

* Fix locale env vars in run_command() calls for group5 batch5 modules

Set LANGUAGE=C and LC_ALL=C via run_command_environ_update to ensure
locale-independent output parsing in pnpm, sysrc, timezone, xattr,
and yarn.



* Add changelog fragment for PR #11776



---------


(cherry picked from commit e45e6cbb5d)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-12 22:27:07 +02:00
patchback[bot]
c65a675a52 [PR #11777/a9d6bb2a backport][stable-12] Ensure standard locale in run_command (group5-batch6) (#11810)
Ensure standard locale in run_command (group5-batch6) (#11777)

* Fix locale env vars in run_command() calls for group5 batch6 modules

Set LANGUAGE=C and LC_ALL=C via run_command_environ_update to ensure
locale-independent output parsing in yum_versionlock and zypper_repository.



* Add changelog fragment for PR #11777



---------


(cherry picked from commit a9d6bb2a15)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-12 22:27:00 +02:00
patchback[bot]
e757ff30b3 [PR #11778/42a1998b backport][stable-12] Ensure standard locale in run_command (group5-batch7) (#11805)
Ensure standard locale in run_command (group5-batch7) (#11778)

* Fix locale env vars in run_command() calls for group5 batch7 modules

Set LANGUAGE=C and LC_ALL=C via run_command_environ_update to ensure
locale-independent output parsing in zfs, zfs_delegate_admin,
zfs_facts, and zpool_facts.



* Add changelog fragment for PR #11778



---------


(cherry picked from commit 42a1998bde)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-12 22:26:47 +02:00
patchback[bot]
d74e553f4b [PR #11767/f4f2bfe8 backport][stable-12] openbsd_pkg, sorcery: ensure standard locale in run_command (group4-batch2) (#11793)
openbsd_pkg, sorcery: ensure standard locale in run_command (group4-batch2) (#11767)

* ensure standard locale in run_command (group4-batch2)

Adds ``LANGUAGE=C`` and ``LC_ALL=C`` to the ``environ_update`` passed to
``run_command()`` calls in modules that parse command output, to prevent
locale-dependent parsing failures on non-C-locale systems.

Modules updated: openbsd_pkg, sorcery.



* add changelog fragment for group4-batch2



* add changelog fragment for group4-batch2



---------


(cherry picked from commit f4f2bfe847)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-12 13:43:30 +02:00
patchback[bot]
d5a759b2e3 [PR #11765/2297a5c8 backport][stable-12] Ensure standard locale in run_command (group4-batch1) (#11792)
Ensure standard locale in run_command (group4-batch1) (#11765)

* ensure standard locale in run_command (group4)

Adds ``LANGUAGE=C`` and ``LC_ALL=C`` to the ``environ_update`` passed to
``run_command()`` calls in modules that parse command output, to prevent
locale-dependent parsing failures on non-C-locale systems.

Modules updated: dconf, pkgng, terraform.



* add changelog fragment for group4



* add PR link to group4 changelog fragment



* fix changelog fragment: rename with PR prefix, fix URL order



---------


(cherry picked from commit 2297a5c876)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-12 13:38:43 +02:00
patchback[bot]
c8f2219fb0 [PR #11733/6f12d930 backport][stable-12] gem: use CmdRunner (#11791)
gem: use `CmdRunner` (#11733)

* gem: use `CmdRunner`

* add changelog frag

* gem: restore get_rubygems_path() helper to preserve executable splitting



---------


(cherry picked from commit 6f12d93057)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-12 13:38:08 +02:00
patchback[bot]
8fe227d456 [PR #11487/5eaa22b0 backport][stable-12] ipa_host: fix errors when disabling host (#11789)
ipa_host: fix errors when disabling host (#11487)

* fix errors when disabling host

- Fix the logic to actually allow disabling hosts
- Fix the dict != string error when an error does happen
- Add has_keytab to returned dicts to allow users to see whether the host is disabled

* Add changelog-fragments

* Run formatters

* More formatting

* Remove feature, only fix the logic

* Update changelogs/fragments/11487-ipa-host-fix-disable.yml



* Update changelogs/fragments/11487-ipa-host-fix-disable.yml



* Back to fstring

* Update plugins/modules/ipa_host.py



* Use a more Pythonic way for the if

* Nox

* Revert back to working if

* Simplify if

* Remove extra get

---------




(cherry picked from commit 5eaa22b067)

Co-authored-by: quasd <quasd@users.noreply.github.com>
Co-authored-by: quasd <1747330+quasd@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-04-12 13:37:56 +02:00
patchback[bot]
59fe80ef94 [PR #11712/bd7b361d backport][stable-12] nsupdate: fix GSS-TSIG support (#11790)
nsupdate: fix GSS-TSIG support (#11712)

The fix for missing keyring initialization without TSIG auth in
PR #11461 put the initialization of "self.keyring" and "self.keyname"
in an else clause after checking if "key_name" is set.

The problem is that for "key_algorithm" == "gss-tsig":
a) "key_name" isn't set
b) self.keyring and self.keyname have already been initialized and
   will be discarded

This means that gss-tsig support is broken. Fix it by moving the
initialization of "self.keyring" and "self.keyname" to the top.

(cherry picked from commit bd7b361db1)

Co-authored-by: David Härdeman <david@hardeman.nu>
2026-04-12 13:37:47 +02:00
patchback[bot]
d9a2fa9bd9 [PR #11753/c7deda2e backport][stable-12] java_cert: support proxy authentication from https_proxy env var (#11761)
java_cert: support proxy authentication from https_proxy env var (#11753)

* java_cert: support proxy authentication from https_proxy env var

When https_proxy is set with credentials (USER:PASSWORD@HOST:PORT),
pass the corresponding JVM proxy auth flags to keytool and clear the
JDK 8u111+ Basic auth tunneling restriction.

Fixes https://github.com/ansible-collections/community.general/issues/4126
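The credential extraction can be sketched as follows (property names follow the standard JVM proxy settings; the exact flags passed to keytool may differ in the real module):

```python
from urllib.parse import urlparse


def proxy_auth_properties(https_proxy):
    # Parse an https_proxy value such as "http://USER:PASSWORD@HOST:PORT"
    # into JVM system properties for keytool.
    parsed = urlparse(https_proxy)
    props = {"https.proxyHost": parsed.hostname, "https.proxyPort": parsed.port}
    if parsed.username and parsed.password:
        props["https.proxyUser"] = parsed.username
        props["https.proxyPassword"] = parsed.password
        # Re-enable Basic auth for HTTPS tunneling (restricted since JDK 8u111).
        props["jdk.http.auth.tunneling.disabledSchemes"] = ""
    return props
```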



* java_cert: add changelog fragment for PR #11753



* java_cert: fix changelog fragment type to minor_changes



---------


(cherry picked from commit c7deda2ec7)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-09 06:35:16 +02:00
Felix Fontein
a046ae812e Prepare 12.6.0. 2026-04-08 20:13:58 +02:00
patchback[bot]
953d70611b [PR #11754/b780224d backport][stable-12] mssql_script: only pass params to cursor.execute() when provided (#11758)
mssql_script: only pass params to cursor.execute() when provided (#11754)

* mssql_script: only pass params to cursor.execute() when provided

Fixes #11699
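The fix reduces to a conditional call shape; a sketch with a stand-in cursor (the reasoning in the comment is an assumption about the driver's behavior, not taken from the commit):

```python
class RecordingCursor:
    # Minimal stand-in for a pymssql cursor, used here for illustration.
    def __init__(self):
        self.calls = []

    def execute(self, *args):
        self.calls.append(args)


def execute_script(cursor, query, params=None):
    # Only forward params to cursor.execute() when the user actually
    # provided them, so the driver does not attempt parameter substitution
    # on scripts that contain none.
    if params:
        cursor.execute(query, params)
    else:
        cursor.execute(query)
```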



* mssql_script: add changelog fragment for PR #11754



---------


(cherry picked from commit b780224d6d)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-08 20:06:07 +02:00
patchback[bot]
04367d8b9c [PR #11742/bdd31745 backport][stable-12] nmcli: use get_best_parsable_locale() to support UTF-8 connection names (#11757)
nmcli: use get_best_parsable_locale() to support UTF-8 connection names (#11742)

* nmcli: start locale fix - normalize run_command environ to LANGUAGE=C, LC_ALL=C

Work in progress - issue #10384 (UTF-8 conn_name support) requires deeper
investigation beyond simple locale variable normalization.



* nmcli: use get_best_parsable_locale() to support UTF-8 connection names

Fixes issue where UTF-8 connection names (e.g. Chinese characters) were
corrupted to '????' when LC_ALL=C forced ASCII encoding, causing
connection_exists() to always return False for non-ASCII names.



* add changelog fragment for PR #11742



---------


(cherry picked from commit bdd3174563)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-08 15:46:30 +02:00
patchback[bot]
e5f9516335 [PR #11741/e59888dd backport][stable-12] Ensure standard locale in run_command (group3-batch3) (#11756)
Ensure standard locale in run_command (group3-batch3) (#11741)

* run_command locale group3 batch3: normalise to LANGUAGE=C, LC_ALL=C



* fix changelog fragment: bugfixes, American English, separate code spans



* fix changelog fragment: correct PR number (11741)



---------


(cherry picked from commit e59888dd7e)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-08 15:34:34 +02:00
patchback[bot]
b4f09831b0 [PR #11740/936ab2ea backport][stable-12] Ensure standard locale in run_command (group3-batch2) (#11755)
Ensure standard locale in run_command (group3-batch2) (#11740)

* run_command locale group3 batch2: normalise to LANGUAGE=C, LC_ALL=C



* fix changelog fragment: bugfixes, American English, separate code spans



* fix changelog fragment: correct PR number (11740)



* remove nmcli from batch2 - moved to dedicated branch



---------


(cherry picked from commit 936ab2ea56)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-08 15:34:23 +02:00
patchback[bot]
5f5043d4b8 [PR #11743/849a7ee8 backport][stable-12] Add stable-2.21 to CI (#11745)
Add stable-2.21 to CI (#11743)

Add stable-2.21 to CI.

(cherry picked from commit 849a7ee899)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-04-06 22:13:07 +02:00
patchback[bot]
ea465e21c3 [PR #11738/c90b5046 backport][stable-12] Ensure standard locale in run_command (group3-batch1) (#11739)
Ensure standard locale in run_command (group3-batch1) (#11738)

* ensure standard locale in run_command (group3-batch1)

* add changelog frag

* fix changelog fragment: bugfixes, not minor_changes



* fix changelog fragment: American English, separate code spans per variable



---------


(cherry picked from commit c90b504626)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-06 10:05:05 +02:00
patchback[bot]
f00f70f849 [PR #11717/b2cd1b55 backport][stable-12] Fix KeyError for 'dnsttl' (#11736)
Fix KeyError for 'dnsttl' (#11717)

* Fix KeyError for 'dnsttl'

I did not dig further into the code. However, since upgrading to the latest version of `community.general`, ansible fails with a weird error message "dnsttl" at a task where `community.general.ipa_dnsrecord` is called. After digging into the code a bit, I found out that it is a KeyError caused by this line of code. I am not sure if it is safe to skip that line and not set `result["dnsttl"]`.

* Add changelog fragment

* Adopt suggestion for changelogs/fragments/11717-fix-error-dnsttl.yml



---------


(cherry picked from commit b2cd1b555e)

Co-authored-by: sedrubal <sedrubal@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-04-04 18:44:05 +00:00
patchback[bot]
6020893160 [PR #11729/bdb82c72 backport][stable-12] chore: devcontainer/pre-commit (#11732)
chore: devcontainer/pre-commit (#11729)

(cherry picked from commit bdb82c7248)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-04-04 11:00:27 +02:00
patchback[bot]
1eecf281bc [PR #11728/2acb20be backport][stable-12] opendj_backendprop: use CmdRunner (#11731)
opendj_backendprop: use CmdRunner (#11728)

* opendj_backendprop: use CmdRunner

* add changelog frag

(cherry picked from commit 2acb20bec2)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-04-03 15:21:16 +02:00
patchback[bot]
fa9ac2b3a9 [PR #11719/66886d08 backport][stable-12] integration tests: remove CentOS conditionals - part 2 (#11730)
integration tests: remove CentOS conditionals - part 2 (#11719)

* test(integration): remove CentOS references - part 2

* adjustments from review

(cherry picked from commit 66886d08f5)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-04-03 12:34:15 +00:00
patchback[bot]
f77d731faf [PR #11715/79431c36 backport][stable-12] integration tests: remove CentOS conditionals (#11726)
integration tests: remove CentOS conditionals (#11715)

* test(integration): remove CentOS references

* further simplification

* more removals

* rollback systemd_info for now

* ufw: not trivially used with RHEL9 and RHEL10, simplifying tests

* remove tasks for setup_epel where unused

* adjustments from review

(cherry picked from commit 79431c36b5)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-04-03 10:41:10 +02:00
patchback[bot]
56bcb0c32b [PR #11697/8b114e99 backport][stable-12] consul integration tests: re-enable on macOS (#11727)
consul integration tests: re-enable on macOS (#11697)

* consul integration tests: re-enable on macOS

- Update consul version to 1.22.6
- Add arm64/aarch64 architecture support
- Fix macOS Gatekeeper quarantine on downloaded binary
- Add wait_for before ACL bootstrap (race condition fix)
- Update HCL config to use tls stanza (required in 1.22)
- Disable gRPC port (conflicts with tls stanza when not configured)
- Remove skip/macos from aliases

Fixes: https://github.com/ansible-collections/community.general/issues/1016



* changelogs/fragments: add PR number for consul tests fix



* remove changelog fragment (test-only PR)



---------


(cherry picked from commit 8b114e999e)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-03 08:03:09 +02:00
patchback[bot]
83aa142331 [PR #11682/b79a4575 backport][stable-12] snap_connect: new module to manage snap interface connections (#11722)
snap_connect: new module to manage snap interface connections (#11682)

* snap_connect: new module to manage snap interface connections

Fixes #7722



* simplify _get_connections()

---------


(cherry picked from commit b79a45753f)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-04-03 07:35:13 +02:00
patchback[bot]
cac85a5480 [PR #11720/982f9472 backport][stable-12] test(integration): fix for ansible-core devel changes in register (#11724)
test(integration): fix for ansible-core devel changes in register (#11720)

(cherry picked from commit 982f9472c5)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-04-03 07:35:02 +02:00
patchback[bot]
3bca7e1ad4 [PR #11721/08442186 backport][stable-12] xenserver_guest: fix code style caught by codeqa (#11725)
xenserver_guest: fix code style caught by codeqa (#11721)

* xenserver_guest: fix code style caught by codeqa

* add changelog frag

(cherry picked from commit 08442186e6)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-04-03 07:34:32 +02:00
patchback[bot]
b41916285e [PR #11701/d956fb81 backport][stable-12] jira - add cloud option to support Jira Cloud search endpoint (#11716)
jira - add cloud option to support Jira Cloud search endpoint (#11701)

* jira - add cloud option to support Jira Cloud search endpoint

Jira Cloud has removed the legacy GET /rest/api/2/search endpoint
(see https://developer.atlassian.com/changelog/#CHANGE-2046).

Add a new boolean `cloud` option (default false). When set to true,
the search operation uses the replacement /rest/api/2/search/jql
endpoint. The default remains false to preserve backward compatibility
for Jira Data Center / Server users.

Fixes: https://github.com/ansible-collections/community.general/issues/10786

Assisted by AI: Claude 4.6 Opus (Anthropic) via Cursor IDE
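The endpoint selection described above can be sketched roughly like this (a simplified illustration, not the module's actual code; `base_url` is a placeholder):

```python
def search_endpoint(base_url, cloud=False):
    """Pick the Jira search endpoint: with cloud=True use the replacement
    /rest/api/2/search/jql route; the default keeps the legacy /search
    endpoint for Data Center / Server backward compatibility."""
    suffix = "/rest/api/2/search/jql" if cloud else "/rest/api/2/search"
    return base_url.rstrip("/") + suffix
```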



* Adding PR link to changelogs/fragments/10786-jira-cloud-search.yml



* Adding note about future usage of cloud parameter



---------



(cherry picked from commit d956fb8197)

Signed-off-by: Vladimir Vasilev <vvasilev@redhat.com>
Co-authored-by: vladi-k <53343355+vladi-k@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-04-01 07:06:25 +02:00
patchback[bot]
1df4b3ee74 [PR #11690/f4e5fc09 backport][stable-12] monit: re-enable tests in RHEL (#11714)
monit: re-enable tests in RHEL (#11690)

* re-enable monit tests in rhel

* enable EPEL for RHEL<11

* rollback EPEL setup, skip only specific versions

* remove skip entirely

* change download URL in setup_epel, adjusted code to use it

* claude tries to install virtualenv, round 1

* claude tries python3 -m venv instead

* remove outdated centos6 file

(cherry picked from commit f4e5fc09d7)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-04-01 07:06:16 +02:00
patchback[bot]
ae089aefad [PR #11688/85685944 backport][stable-12] flatpak: fix removal of runtimes (#11713)
flatpak: fix removal of runtimes (#11688)

* flatpak: fix removal of runtimes (issue #553)

The module was using `--app` when listing installed flatpaks for name
matching, which excluded runtimes from the results. This caused removal
of runtimes to fail even though `flatpak_exists()` correctly detected
them as installed (it lists both apps and runtimes).

Fix by dropping `--app` from the three matching functions so that both
apps and runtimes are searchable.
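The effect of dropping the flag can be sketched as follows (an illustration of the command-line difference, not the module's actual code):

```python
def flatpak_list_command(apps_only=False):
    """Build the listing command used for name matching. With --app,
    flatpak lists applications only, so runtimes were invisible to the
    matching step; without it, both apps and runtimes are returned."""
    cmd = ["flatpak", "list", "--columns=application"]
    if apps_only:
        cmd.append("--app")  # old behaviour: excluded runtimes
    return cmd
```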



* flatpak: add changelog fragment for PR #11688



---------


(cherry picked from commit 8568594453)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-31 08:06:46 +02:00
patchback[bot]
f92fedcfa0 [PR #11683/5a27cbde backport][stable-12] snmp_facts: update to pysnmp >= 7.1 async API (#11711)
snmp_facts: update to pysnmp >= 7.1 async API (#11683)

* snmp_facts: update to pysnmp >= 7.1 async API

Migrate snmp_facts module from the removed pysnmp oneliner API
(pysnmp.entity.rfc3413.oneliner.cmdgen) to the current async API
(pysnmp.hlapi.v3arch.asyncio).

This fixes compatibility with Python 3.12+ and pysnmp >= 7.1.

Closes #8852

* Continue to support pysnmp 6.2.4

* Correct PR number

* sort imports

* shorter changelog

* move `SNMP_DEFAULT_PORT`

* Add `notes:`

* Become an author

* use `deps.declare`

* add lalten to BOTMETA

(cherry picked from commit 5a27cbdec6)

Co-authored-by: Laurenz <lalten@users.noreply.github.com>
2026-03-30 22:33:30 +02:00
patchback[bot]
9d269ee8ca [PR #11698/47ef322a backport][stable-12] ipa module utils: detect and fail on errors in API response failed field (#11710)
ipa module utils: detect and fail on errors in API response `failed` field (#11698)

* ipa_* modules: detect and fail on errors in API response ``failed`` field

Fixes: https://github.com/ansible-collections/community.general/issues/1239
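The check added here can be sketched like this (the nested response shape below is illustrative only, not a guaranteed IPA API contract):

```python
def ipa_response_errors(result):
    """Collect non-empty entries from an IPA API response's ``failed``
    field, which IPA uses to report per-item errors even when the call
    itself returns success; a non-empty result should fail the task."""
    errors = []
    for section in (result.get("failed") or {}).values():
        for attr, items in section.items():
            if items:
                errors.append((attr, items))
    return errors
```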



* fix chglog frag

* adjust chglog frag

---------


(cherry picked from commit 47ef322a5f)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-30 21:43:39 +02:00
patchback[bot]
66d394dc81 [PR #11686/68ae04a9 backport][stable-12] Cleanup of aliases skip statements (#11709)
Cleanup of `aliases` skip statements (#11686)

* add scripts to clean aliases' skips

* remove legacy skips

* code cosmetics

* add license to ALIASES.md

* Fix typos in ALIASES.md documentation

* rolling back freebsd14.2 and 14.3 in iso_extract

* fix versions and re-run

(cherry picked from commit 68ae04a95a)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-03-30 19:58:13 +02:00
patchback[bot]
de180d01e0 [PR #11689/a4bba992 backport][stable-12] composer - make create-project idempotent, add force parameter (#11700)
composer - make `create-project` idempotent, add `force` parameter (#11689)

* composer - make create-project idempotent, add force parameter

Adds a check for an existing composer.json in working_dir before running
create-project, so the task is skipped rather than failing on second run.
A new force parameter allows bypassing this check when needed.

Fixes #725.
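The idempotency check can be sketched as follows (not the module's actual code; a minimal illustration of the skip/force logic):

```python
import os

def should_run_create_project(working_dir, force=False):
    """Skip create-project when a composer.json already exists in
    working_dir, so a second run is a no-op instead of a failure;
    force=True bypasses the check."""
    if force:
        return True
    return not os.path.exists(os.path.join(working_dir, "composer.json"))
```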



* changelog fragment: rename to PR number, add PR URL



---------


(cherry picked from commit a4bba99203)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-27 21:34:35 +01:00
patchback[bot]
1a1056099c [PR #11685/909458a6 backport][stable-12] docs: improve timezone module examples and add hwclock usage (#11696)
docs: improve timezone module examples and add hwclock usage (#11685)

* docs: add variable-based example for timezone module

### Summary
Added a variable-based example to the EXAMPLES section of the timezone module.

### Changes
- Added an example demonstrating how to set timezone dynamically using a variable

### Motivation
Using variables is a common practice in Ansible playbooks. This example helps users understand how to make the module usage more flexible and reusable.

* docs: improve timezone module examples with hwclock usage

### Summary
Improved the EXAMPLES section of the timezone module by adding a more meaningful, module-specific example.

### Changes
- Added an example demonstrating usage of the `hwclock` parameter
- Simplified examples to avoid redundancy
- Fixed formatting issues causing CI failures (invalid YAML, lint errors)

### Motivation
The previous examples were minimal and did not demonstrate module-specific features. This update adds a more practical use case and ensures the examples follow proper formatting and validation rules.

(cherry picked from commit 909458a661)

Co-authored-by: Anshjeet Mahir <anshjeetmahir123@gmail.com>
2026-03-27 12:45:03 +01:00
patchback[bot]
300f525ff9 [PR #11673/12af50cf backport][stable-12] docs: add Execution Environment guide (#11693)
docs: add Execution Environment guide (#11673)

* docs: add Execution Environment guide

Closes #2968
Closes #4512



* add to botmeta

* fix code block language

* Apply suggestions from code review



* Update section title for community.general EE metadata

* Apply suggestion from felixfontein



* Remove extraneous paragraph

* Apply suggestions from code review




* remove link to legacy documentation

* Update docs/docsite/rst/guide_ee.rst



---------




(cherry picked from commit 12af50cfb7)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Don Naro <dnaro@redhat.com>
2026-03-26 22:20:21 +01:00
patchback[bot]
03a639e809 [PR #11677/ef700b11 backport][stable-12] nsupdate: add unit tests (#11692)
nsupdate: add unit tests (#11677)

* nsupdate: add unit tests



* fix var name to regain sanity

* remove unneeded typing from test file

* formatting

---------


(cherry picked from commit ef700b116a)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-26 21:47:56 +01:00
patchback[bot]
f739035d1f [PR #11681/e2c06f2d backport][stable-12] pacman: add root, cachedir, and config options (#11684)
pacman: add root, cachedir, and config options (#11681)

* pacman: add root, cachedir, and config options

Add three dedicated options -- O(root), O(cachedir), and O(config) --
so that all pacman commands get the corresponding global flags
(--root, --cachedir, --config) prepended, enabling use cases such as
installing packages into a chroot or alternative root directory
(similar to pacstrap).
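The flag prepending can be sketched like this (an illustration of the described behaviour, not the module's actual code):

```python
def pacman_command(action_args, root=None, cachedir=None, config=None):
    """Prepend pacman's global flags (--root, --cachedir, --config) to
    every invocation, e.g. to install packages into a chroot or an
    alternative root directory, pacstrap-style."""
    cmd = ["pacman"]
    if root:
        cmd += ["--root", root]
    if cachedir:
        cmd += ["--cachedir", cachedir]
    if config:
        cmd += ["--config", config]
    return cmd + list(action_args)
```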



* add changelog frag

---------


(cherry picked from commit e2c06f2d12)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-26 06:47:20 +01:00
patchback[bot]
17e02f87c9 [PR #11678/d06c83eb backport][stable-12] etcd3: re-enable and fix tests, add unit tests (#11680)
etcd3: re-enable and fix tests, add unit tests (#11678)

* etcd3: re-enable and fix tests, add unit tests

- Add unit tests for community.general.etcd3 module (12 tests covering
  state=present/absent, idempotency, check mode, and error paths)
- Fix integration test setup: update etcd binary to v3.6.9 (from v3.2.14),
  download from GitHub releases, add health-check retry loop after start
- Work around etcd3 Python library incompatibility with protobuf >= 4.x
  by setting PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python
- Update to FQCNs throughout integration tests
- Re-enable both etcd3 and lookup_etcd3 integration targets

Fixes https://github.com/ansible-collections/community.general/issues/322
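The protobuf workaround amounts to selecting the pure-Python implementation before the client library is imported; a minimal sketch:

```python
import os

# Force the pure-Python protobuf backend so the etcd3 client library's
# generated code keeps working with protobuf >= 4.x. Must run before the
# library is imported; setdefault keeps any value already in the env.
os.environ.setdefault("PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION", "python")
```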



* improve use of multiple context managers

---------


(cherry picked from commit d06c83eb68)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
2026-03-25 07:05:05 +01:00
patchback[bot]
becbd2d80f [PR #11674/cc59f7eb backport][stable-12] botmeta: fix sorting (#11676)
botmeta: fix sorting (#11674)

(cherry picked from commit cc59f7ebeb)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-03-24 22:12:26 +01:00
patchback[bot]
aa352ccf45 [PR #11664/d48a0668 backport][stable-12] mssql_*: named instances (#11672)
mssql_*: named instances (#11664)

* mssql_*: named instances

* add changelog frag

* fix changelog

* Update plugins/modules/mssql_db.py

* Update plugins/modules/mssql_db.py

* Update plugins/modules/mssql_script.py

* Update plugins/modules/mssql_script.py

* fix backslashes

* Update plugins/modules/mssql_db.py



* Update plugins/modules/mssql_script.py



---------


(cherry picked from commit d48a066821)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-03-24 07:00:36 +01:00
Felix Fontein
e5b01dfc01 The next expected release is 12.6.0. 2026-03-23 21:47:25 +01:00
Felix Fontein
a5d830bbf4 Release 12.5.0. 2026-03-23 21:24:48 +01:00
patchback[bot]
02b25fb096 [PR #11655/6d3ab1a8 backport][stable-12] passwordstore lookup: update code meant for Python2 (#11669)
passwordstore lookup: update code meant for Python2 (#11655)

* passwordstore lookup: update code meant for Python2

* add changelog frag

* add check param to subprocess.run() to reinstate sanity
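The modernised call pattern looks roughly like this (a generic sketch of `subprocess.run` with `check=True`, not the plugin's actual invocation):

```python
import subprocess
import sys

# subprocess.run() with check=True raises CalledProcessError on a
# non-zero exit instead of silently continuing, replacing the older
# Python-2-era Popen plumbing; text=True yields str output.
result = subprocess.run(
    [sys.executable, "-c", "print('ok')"],
    capture_output=True, text=True, check=True,
)
```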

(cherry picked from commit 6d3ab1a80c)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-03-23 20:35:20 +01:00
patchback[bot]
555d7b9038 [PR #11658/25a4f568 backport][stable-12] puppet: deprecate param timeout (#11665)
puppet: deprecate param timeout (#11658)

* puppet: deprecate param timeout

* add changelog frag

* Update changelogs/fragments/11658-puppet-timeout-deprecation.yml



---------


(cherry picked from commit 25a4f568f9)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-03-23 20:35:09 +01:00
patchback[bot]
4dfe6816a8 [PR #11622/7c039918 backport][stable-12] keycloak_realm: Add support for setting first broker login flow (#11670)
keycloak_realm: Add support for setting first broker login flow (#11622)

* keycloak_realm: Add support for setting first broker login flow

* Update plugins/modules/keycloak_realm.py



* Add changelog fragment

---------


(cherry picked from commit 7c039918e0)

Co-authored-by: Nils Bergmann <Nils1794@gmail.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-03-23 20:34:51 +01:00
patchback[bot]
a12ac59223 [PR #11656/4dad53ab backport][stable-12] counter_enabled callback: honor display_ok_hosts setting (#11667)
counter_enabled callback: honor display_ok_hosts setting (#11656)

* fix(callback/counter_enabled): honor display_ok_hosts setting

* add changelog frag

* Update changelogs/fragments/11656-counter_enabled-display_ok_hosts.yml

(cherry picked from commit 4dad53abac)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-03-23 20:34:43 +01:00
patchback[bot]
9d7097ef4d [PR #11635/3c21ac96 backport][stable-12] nmcli: fix setting_types() to properly handle routing_rules as a list type (#11668)
nmcli: fix setting_types() to properly handle routing_rules as a list type (#11635)

* Fix setting_types() to properly handle routing_rules as a list type

* Add changelog fragment for ipv6.routing-rules bugfix

* Update changelogs/fragments/11630-nmcli-ipv6-routing-rules.yml



* Add PR URL to changelog fragment

---------


(cherry picked from commit 3c21ac961b)

Co-authored-by: Ted W. <ted.l.wood@gmail.com>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-03-23 20:34:36 +01:00
patchback[bot]
8e4581c0e6 [PR #11659/d6cb56c0 backport][stable-12] osx_defaults: add dict support (#11671)
osx_defaults: add dict support (#11659)

* osx_defaults: add dict support

* add changelog frag

* osx_defaults: fix dict idempotency by using plutil -extract for type-preserving read

The previous approach piped `defaults read` output (old-style plist text)
through `plutil -convert json`. Old-style plist loses boolean type info
(booleans appear as 1/0, indistinguishable from integers), causing the
comparison to fail and reporting changed=True on every run.

Fix by exporting the domain binary plist to a temp file and using
`plutil -extract key json` which correctly preserves all plist types
(booleans stay true/false, integers stay integers, etc.).
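The comparison pitfall can be sketched like this (illustrative only; the module's actual comparison logic may differ):

```python
def values_match(current, desired):
    """Type-aware equality: with lossy old-style plist output a stored
    True came back as the int 1, so this check failed on every run;
    reading via `plutil -extract <key> json` keeps real booleans."""
    return type(current) is type(desired) and current == desired
```

For example, `values_match(1, True)` is False (the spurious changed=True case), while `values_match(True, True)` is True once the read preserves types.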



* change param from bool to str

* Apply suggestion from review

* Update plugins/modules/osx_defaults.py



---------



(cherry picked from commit d6cb56c022)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-03-23 20:34:28 +01:00
patchback[bot]
28b50a1e45 [PR #11657/d48e767e backport][stable-12] open_iscsi: support IPv6 portals (#11663)
open_iscsi: support IPv6 portals (#11657)

* fix(modules/open_iscsi): support IPv6 portals

* add changelog frag

(cherry picked from commit d48e767e1e)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-03-23 07:04:44 +01:00
patchback[bot]
414f0541a5 [PR #11654/b85a1687 backport][stable-12] test: remove redundant unit test requirements (#11662)
test: remove redundant unit test requirements (#11654)

(cherry picked from commit b85a168716)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-03-23 07:04:31 +01:00
patchback[bot]
c4da6e4202 [PR #11660/b1ac989c backport][stable-12] remove skip/aix from aliases files (#11661)
remove skip/aix from aliases files (#11660)

(cherry picked from commit b1ac989c70)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-03-23 07:04:20 +01:00
Felix Fontein
05f3052937 Prepare 12.5.0. 2026-03-22 20:35:05 +01:00
patchback[bot]
919d27676c [PR #11632/69b9a3f8 backport][stable-12] supervisorctl: skip no such process for all (#11652)
supervisorctl: skip no such process for all (#11632)

* feat(supervisorctl): skip no such process for all

Do not fail if there are no matching processes for name=all
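The behaviour change can be sketched as follows (an illustration of the skip rule, not the module's actual code):

```python
def is_fatal_no_such_process(output_line, name):
    """A 'no such process' reply from supervisorctl only fails the task
    when a specific program name was requested; with name=all it is
    skipped so an empty process list is not an error."""
    if "no such process" in output_line.lower():
        return name != "all"
    return False
```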

* feat(supervisorctl): add changelog

* Update 11621-skip-no_such_process-for-name-all.yml



* fix(supervisorctl): replace single quotes with double quotes

---------



(cherry picked from commit 69b9a3f8e2)

Co-authored-by: zr0dy <58261587+zr0dy@users.noreply.github.com>
Co-authored-by: zr0dy <zr0dy@mail.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-03-22 20:33:58 +01:00
patchback[bot]
a425d16e7c [PR #11646/8d403dde backport][stable-12] ansible_galaxy_install: new param executable (#11651)
ansible_galaxy_install: new param executable (#11646)

* ansible_galaxy_install: new param executable

* add changelog frag

(cherry picked from commit 8d403dde5b)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-03-22 20:33:41 +01:00
patchback[bot]
12808f67d5 [PR #11645/a09e879f backport][stable-12] xfconf: fix boolean return values (#11650)
xfconf: fix boolean return values (#11645)

* xfconf: fix boolean return values

* add changelog frag

(cherry picked from commit a09e879ff2)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-03-22 20:33:31 +01:00
patchback[bot]
268b31b53d [PR #11639/758a445d backport][stable-12] npm: use uthelper for tests (#11644)
npm: use uthelper for tests (#11639)

(cherry picked from commit 758a445d97)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-03-22 11:17:30 +01:00
patchback[bot]
fa682e8b40 [PR #11638/4f5e5c9b backport][stable-12] python_runner: add integration tests (#11643)
test(python_runner): add integration tests (#11638)

* test(python_runner): add integration tests

* simplify the test

* add missing quotes

* use setup_remote_tmp_dir

* build venv manually first

(cherry picked from commit 4f5e5c9bb6)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-03-22 11:17:12 +01:00
patchback[bot]
0f7d508344 [PR #11637/3aa4a298 backport][stable-12] cmd_runner_fmt tests: assert that unpack_* functions can handle _ArgFormat objects (#11642)
test(cmd_runner_fmt): assert that `unpack_*` functions can handle `_ArgFormat` objects (#11637)

test(cmd_runner_fmt): assert that unpack functions can handle _ArgFormat objects

(cherry picked from commit 3aa4a29842)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-03-22 11:13:50 +01:00
patchback[bot]
ac771079db [PR #11636/1dfc4fed backport][stable-12] test: uthelper now generates one test function per test case (#11641)
test: uthelper now generates one test function per test case (#11636)

(cherry picked from commit 1dfc4fed40)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-03-22 11:05:28 +01:00
patchback[bot]
96852b7032 [PR #11631/b4336659 backport][stable-12] CI: Remove FreeBSD 14.3 for devel, and replace macOS 15.3 with 26.3 (#11634)
CI: Remove FreeBSD 14.3 for devel, and replace macOS 15.3 with 26.3 (#11631)

* Replace FreeBSD 14.3 with 14.4, and macOS 15.3 with 26.3.

* FreeBSD 14.4 seems to have the same problem as FreeBSD 15.0; disabling it for now.

(cherry picked from commit b4336659f6)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-03-21 21:13:57 +01:00
patchback[bot]
08bb917d59 [PR #11625/bc22fbca backport][stable-12] CI: Replace apt_repository and apt_key with deb822_repository (#11627)
CI: Replace apt_repository and apt_key with deb822_repository (#11625)

Replace apt_repository and apt_key with deb822_repository.

(cherry picked from commit bc22fbcaa0)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-03-20 08:06:18 +01:00
patchback[bot]
e7e9cf97e5 [PR #11536/dae2157b backport][stable-12] merge_variables: extended merge capabilities added (#11626)
merge_variables: extended merge capabilities added (#11536)

* merge_variables: extended merge capabilities added

This extension gives you more control over the variable merging process of the lookup plugin `merge_variables`. It closes the gap between Puppet's Hiera merging capabilities and the limitations of Ansible's default variable plugin `host_group_vars` regarding fragment-based value definition. You can now decide which merge strategy should be applied to dicts, lists, and other types. Furthermore, you can specify a merge strategy that should be applied in case of type conflicts.

The default behavior of the plugin has been preserved so that it is fully backward-compatible with the already implemented state.
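The per-type dispatch this extension adds can be sketched like this (strategy names below are illustrative, not necessarily the plugin's actual option values):

```python
def merge_values(a, b, dict_strategy="merge", list_strategy="append"):
    """Apply a merge strategy chosen per value type: dicts can be merged
    or replaced, lists appended or replaced, and a type conflict falls
    back to a conflict strategy (here: the later value wins)."""
    if isinstance(a, dict) and isinstance(b, dict):
        if dict_strategy == "merge":
            merged = dict(a)
            merged.update(b)
            return merged
        return b  # "replace"
    if isinstance(a, list) and isinstance(b, list):
        return a + b if list_strategy == "append" else b
    return b  # type conflict: later value wins in this sketch
```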



* Update changelogs/fragments/11536-merge-variables-extended-merging-capabilities.yml



* Update plugins/lookup/merge_variables.py



* Periods added at the end of each choice description



* Update plugins/lookup/merge_variables.py



* ref: follow project standard for choice descriptions



* ref: more examples added and refactoring



* Update plugins/lookup/merge_variables.py



* ref: some more comments to examples added



* fix: unused import removed



* ref: re-add "merge" to strategy map



* Update comments



* Specification of transformations solely as string



* Comments updated



* ref: `append_rp` and `prepend_rp` removed
feat: options dict for list transformations re-added
feat: allow setting `keep` for dedup transformation with possible values: `first` (default) and `last`



* ref: improve options documentation



* ref: documentation improved, avoiding words like newer or older in merge description



* Update plugins/lookup/merge_variables.py



* ref: "prio" replaced by "dict"



* feat: two integration tests added



---------





(cherry picked from commit dae2157bb7)

Signed-off-by: Fiehe Christoph  <c.fiehe@eurodata.de>
Co-authored-by: Christoph Fiehe <cfiehe@users.noreply.github.com>
Co-authored-by: Fiehe Christoph <c.fiehe@eurodata.de>
Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Mark <40321020+m-a-r-k-e@users.noreply.github.com>
2026-03-19 22:59:56 +01:00
patchback[bot]
deb9d63783 [PR #11585/25b5655b backport][stable-12] keycloak_authentication_v2: verify providerIds (fix 11583) (#11619)
keycloak_authentication_v2: verify providerIds (fix 11583) (#11585)

* 11583 verify providerIds in keycloak_authentication_v2

* 11583 code cleanup

---------


(cherry picked from commit 25b5655be7)

Co-authored-by: thomasbargetz <thomas.bargetz@gmail.com>
Co-authored-by: Thomas Bargetz <thomas.bargetz@rise-world.com>
2026-03-18 18:14:37 +01:00
patchback[bot]
a882022280 [PR #11589/d8bb637c backport][stable-12] nictagadm: don't call is_valid_mac when etherstub is true (#11618)
nictagadm: don't call is_valid_mac when etherstub is true (#11589)

* nictagadm: don't call is_valid_mac when etherstub is true

* Add changelog fragment

* update changelog fragment

* Shorten changelog fragment

* Update changelogs/fragments/nictagadm-etherstub-nonetype-bugfix.yml



---------


(cherry picked from commit d8bb637cba)

Co-authored-by: Adam D <44533090+emptyDir@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-03-18 07:05:16 +01:00
patchback[bot]
f06bcabeed [PR #11601/e7a253b4 backport][stable-12] keycloak_authentication_v2: covers idp flow overrides in safe swap (fix 11582) (#11617)
keycloak_authentication_v2: covers idp flow overrides in safe swap (fix 11582) (#11601)

* 11582 keycloak_authentication_v2 covers idp flow overrides in safe swap

* 11583 update documentation and comments

(cherry picked from commit e7a253b4c9)

Co-authored-by: thomasbargetz <thomas.bargetz@gmail.com>
2026-03-18 07:05:10 +01:00
patchback[bot]
19462b72ca [PR #11612/5e4fbfee backport][stable-12] Update BOTMETA.yml (#11616)
Update BOTMETA.yml (#11612)

remove myself from teams

(cherry picked from commit 5e4fbfeee0)

Co-authored-by: Anatoly Pugachev <matorola@gmail.com>
2026-03-18 07:04:59 +01:00
patchback[bot]
a8bd4c750b [PR #11586/df9b3044 backport][stable-12] github_secrets_info: new module (#11610)
github_secrets_info: new module (#11586)

* github_secrets_info: new module



* clean tests



* remove pynacl dep



* fqcn



* remove excess output



* just return result as sample



* only print secrets, adapt tests



* Update plugins/modules/github_secrets_info.py



* Update plugins/modules/github_secrets_info.py



* Update plugins/modules/github_secrets_info.py



* t is for typing, and typing is what we did



* add info_module attributes



---------



(cherry picked from commit df9b30448a)

Signed-off-by: Thomas Sjögren <konstruktoid@users.noreply.github.com>
Co-authored-by: Thomas Sjögren <konstruktoid@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-03-16 20:28:22 +01:00
patchback[bot]
000b92a425 [PR #11254/cc24e573 backport][stable-12] monit: deprecate support for monit <= 5.18 (#11609)
monit: deprecate support for monit <= 5.18 (#11254)

* monit: deprecate support for monit <= 5.18

* add additional runs for checking version

* add changelog frag

* bump deprecation for 14.0.0

(cherry picked from commit cc24e57307)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-03-16 20:16:41 +01:00
patchback[bot]
7784fbdf17 [PR #11603/c8fe1e57 backport][stable-12] Fix typing imports (#11607)
Fix typing imports (#11603)

Fix typing imports.

(cherry picked from commit c8fe1e571f)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-03-15 19:53:36 +01:00
patchback[bot]
292bb400eb [PR #11605/f642dac9 backport][stable-12] sssd_info: fix attributes (#11606)
sssd_info: fix attributes (#11605)

Fix attributes.

(cherry picked from commit f642dac900)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-03-15 19:53:27 +01:00
patchback[bot]
c6ddff0dad [PR #11514/46ffec6f backport][stable-12] github_secrets: new module (#11602)
github_secrets: new module (#11514)

* add support for managing GitHub secrets



* fix tab



* update for sanity



* more sanity fixes



* update botmeta



* formatting



* remove list function



* remove docstring, format text strings and return codes



* switch to deps



* black and ruff don't get along



* initial unit tests



* update non-existing secret test



* update description and details



* handle when a secret can't be deleted



* fail on non-acceptable error codes



* add test for non-acceptable status codes



* remove local ruff config



* allow empty strings



* set required_



* extend tests



* cleanup



* cover all, got a git urlopen error



* cover all, got a git urlopen error



* ensure value can't be None



* check_mode



* bump to 12.5.0



* Update plugins/modules/github_secrets.py



* extend check_mode and related tests



* split constants and return dict when checking secret



* switch to HTTPStatus



* replace DELETE and UPDATE with NO_CONTENT



* Update plugins/modules/github_secrets.py



* Update plugins/modules/github_secrets.py



* update tests



* Update plugins/modules/github_secrets.py



* Update plugins/modules/github_secrets.py



* Update plugins/modules/github_secrets.py



* Update plugins/modules/github_secrets.py



* Update plugins/modules/github_secrets.py



---------



(cherry picked from commit 46ffec6f0e)

Signed-off-by: Thomas Sjögren <konstruktoid@users.noreply.github.com>
Co-authored-by: Thomas Sjögren <konstruktoid@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-03-15 16:38:30 +01:00
patchback[bot]
86616b1559 [PR #11592/2d685e7a backport][stable-12] test(monit): use uthelper (#11593)
test(monit): use uthelper (#11592)

(cherry picked from commit 2d685e7a85)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-03-14 22:34:22 +01:00
patchback[bot]
99ebbbdf49 [PR #11590/ce5d5622 backport][stable-12] replace list(map(...)) with comprehension (#11591)
replace `list(map(...))` with comprehension (#11590)

* replace `list(map(...))` with comprehension

* add changelog frag

(cherry picked from commit ce5d5622b9)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-03-14 17:14:18 +01:00
patchback[bot]
c853dfb1a8 [PR #11559/3194ed9d backport][stable-12] ipa_dnsrecord fix error when using dnsttl and nothing to change (#11587)
ipa_dnsrecord fix error when using dnsttl and nothing to change (#11559)

* ipa_dnsrecord fix error when using dnsttl and nothing to change

* Add changelog and bump version

* ipa_dnsrecord list comp in dnsrecord_find



* 11559 changelog fragment fix capitalization

* ipa_dnsrecord dnsrecord_find ttl transform to integer always

* ipa_dnsrecord dnsrecord_find method refactor

---------


(cherry picked from commit 3194ed9d36)

Co-authored-by: Dor Breger <75537576+DorBreger@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-03-13 21:14:48 +01:00
patchback[bot]
79d8c9bd6e [PR #11424/f0e3edc8 backport][stable-12] New module: logrotate (#11581)
New module: `logrotate` (#11424)

* add module logrotate

* add values for start

* fix docs

* version 12.5.0 and fix test

---------


(cherry picked from commit f0e3edc892)

Co-authored-by: Aleksandr Gabidullin <101321307+a-gabidullin@users.noreply.github.com>
Co-authored-by: Александр Габидуллин <agabidullin@astralinux.ru>
2026-03-13 08:01:39 +01:00
patchback[bot]
e631648ef6 [PR #11576/ccc974e2 backport][stable-12] Consolidate changelog fragments (#11580)
Consolidate changelog fragments (#11576)

Consolidate changelog fragments.

(cherry picked from commit ccc974e2fa)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-03-13 07:58:25 +01:00
patchback[bot]
5106aa8065 [PR #11557/a69f7e60 backport][stable-12] add module keycloak_authentication_v2 (#11579)
add module keycloak_authentication_v2 (#11557)

* add module keycloak_authentication_v2

* skip sanity checks, because they run into a recursion

* 11556 fix documentation

* 11556 limit the depth of nested flows to 4

* 11556 code cleanup

* 11556 code cleanup - add type hints

* 11556 add keycloak_authentication_v2 to meta/runtime.yml

* 11556 code cleanup - remove custom type hints

* 11556 code cleanup - none checks

* Update plugins/modules/keycloak_authentication_v2.py



* Update plugins/modules/keycloak_authentication_v2.py



* 11556 code cleanup - remove document starts

* 11556 cleanup

* 11556 cleanup

---------




(cherry picked from commit a69f7e60b4)

Co-authored-by: thomasbargetz <thomas.bargetz@gmail.com>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Thomas Bargetz <thomas.bargetz@rise-world.com>
2026-03-13 07:41:56 +01:00
patchback[bot]
25e35bdda7 [PR #11481/55dae7c2 backport][stable-12] doas: allow to explicitly enable pipelining (#11577)
doas: allow to explicitly enable pipelining (#11481)

* Allow to explicitly enable pipelining.

* Add markup.

(cherry picked from commit 55dae7c2a6)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-03-12 21:44:11 +01:00
patchback[bot]
74bc10b8fc [PR #11558/0e4783dc backport][stable-12] Binary attribute support for ldap_attrs and ldap_entry (#11578)
Binary attribute support for `ldap_attrs` and `ldap_entry` (#11558)

* Binary attribute support for `ldap_attrs` and `ldap_entry`

This commit implements binary attribute support for the `ldap_attrs` and
`ldap_entry` plugins. This used to be "supported" before, because it was
possible to simply load arbitrary binary data into the attributes, but
no longer functions on recent Ansible versions.

In order to support binary attributes, this commit introduces two new
options to both plugins:

  * `binary_attributes`, a list of attribute names which will be
    considered as being binary,
  * `honor_binary_option`, a flag which is true by default and will
    handle all attributes that include the binary option (see RFC 4522)
    as binary automatically.

When an attribute is determined to be binary through either of these
means, the plugin will assume that the attribute's value is in fact
base64-encoded. It will proceed to decode it and handle it accordingly.

While changes to `ldap_entry` are pretty straightforward, more work was
required on `ldap_attrs`.

  * First, because both `present` and `absent` state require checking
    the attribute's current values and normally do that using LDAP search
    queries for each value, a specific path for binary attributes was
    added that loads and caches all values for the attribute and compares
    the values in the Python code.
  * In addition, generating both the modlist and the diff output require
    re-encoding binary attributes' values into base64 so it can be
    transmitted back to Ansible.

* Various fixes on `ldap_attrs`/`ldap_entry` from PR 11558 discussion

* Rename `honor_binary_option` to `honor_binary`

* Add some general documentation about binary attributes

* Fix changelog fragment after renaming one of the new options

* Add examples of `honor_binary` and `binary_attributes`

* Add note that indicates that binary values are supported from 12.5.0+

* Fix punctuation

* Add links to RFC 4522 to `ldap_attrs` and `ldap_entry`

* Catch base64 decoding errors

* Rephrase changelog fragment

* Use f-string to format the encoding error message

(cherry picked from commit 0e4783dcc3)

Co-authored-by: Emmanuel Benoît <tseeker@nocternity.net>
2026-03-12 21:39:01 +01:00
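The base64 handling described above can be sketched as follows (hypothetical helpers, assuming RFC 4522 attribute descriptions like `userCertificate;binary`; the real `ldap_attrs`/`ldap_entry` code differs in detail):

```python
import base64
import binascii


def is_binary_attribute(attr_desc, binary_attributes, honor_binary=True):
    # An attribute is treated as binary if it is listed explicitly, or if
    # honor_binary is set and the description carries the ;binary option.
    name, _, options = attr_desc.partition(";")
    if name in binary_attributes:
        return True
    return honor_binary and "binary" in options.split(";")


def decode_binary_values(values, attr_name):
    # Values of binary attributes are assumed to be base64-encoded;
    # decode errors are caught and reported with an f-string message.
    decoded = []
    for value in values:
        try:
            decoded.append(base64.b64decode(value, validate=True))
        except (binascii.Error, ValueError) as exc:
            raise ValueError(f"Attribute {attr_name}: invalid base64 value: {exc}")
    return decoded
```

Generating the diff output then goes the other way: decoded bytes are re-encoded to base64 before being handed back to Ansible.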
patchback[bot]
7415220cad [PR #11573/f9e583da backport][stable-12] fix: remove HTTPStatus constructs introduced in Python 3.11 (#11575)
fix: remove HTTPStatus constructs introduced in Python 3.11 (#11573)

* fix: remove HTTPStatus constructs introduced in Python 3.11

* add changelog frag

(cherry picked from commit f9e583dae2)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-03-12 20:59:26 +01:00
patchback[bot]
7f8bc6f99d [PR #11541/4cd91ba4 backport][stable-12] Fix templating bug in iptables_state tests (#11572)
Fix templating bug in iptables_state tests (#11541)

* Fix templating bug in iptables_state tests.

* Try to install older packages on RHEL.

(cherry picked from commit 4cd91ba4d4)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-03-11 22:15:52 +01:00
patchback[bot]
b5846a3d05 [PR #11567/9b72d954 backport][stable-12] Add missing __future__ imports (#11569)
Add missing __future__ imports (#11567)

Add missing __future__ imports.

(cherry picked from commit 9b72d95452)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-03-11 07:10:34 +01:00
patchback[bot]
25c475a7ef [PR #11561/7436c0c9 backport][stable-12] replace literal HTTP codes with http.HTTPStatus (#11568)
replace literal HTTP codes with `http.HTTPStatus` (#11561)

* replace literal HTTP codes with http.HTTPStatus

* add changelog frag

(cherry picked from commit 7436c0c9ba)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-03-10 22:14:27 +01:00
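`HTTPStatus` members are an `IntEnum`, so they compare equal to the literal codes they replace; note that a follow-up higher in this log had to remove constructs not available before Python 3.11, so long-standing members are the safe choice. An illustrative sketch (not the collection's actual code):

```python
from http import HTTPStatus


def check_response(status_code):
    # HTTPStatus members compare equal to their integer values, so they
    # can replace magic numbers like 200 or 404 without other changes.
    if status_code == HTTPStatus.OK:
        return "ok"
    if status_code == HTTPStatus.NOT_FOUND:
        return "missing"
    return f"unexpected status {status_code}"
```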
patchback[bot]
b3782a76e0 [PR #11551/1554f23b backport][stable-12] nmcli: fix idempotency issue with macvlan (#11566)
nmcli: fix idempotency issue with macvlan (#11551)

* nmcli: fix idempotency issue with macvlan

The nmcli module is not idempotent for macvlan interfaces.

Ansible running in diff mode for a case where the interface in question
already exists:

```
TASK [nm_macvlan : Check macvlan connection] *********************************************************************************
--- before
+++ after
@@ -11,5 +11,5 @@
     "ipv6.method": "disabled",
     "macvlan.mode": "2",
     "macvlan.parent": "eth0",
-    "macvlan.tap": "no"
+    "macvlan.tap": "False"
 }
```
The problem is that `macvlan.tap` isn't treated as a boolean option. Fix it.

* Update changelogs/fragments/11551-fix-nmcli-idempotency-for-macvlan.yml



---------


(cherry picked from commit 1554f23bfb)

Co-authored-by: Martin Wilck <mwilck@suse.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-03-10 22:00:44 +01:00
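The diff above comes from comparing nmcli's "yes"/"no" output against the string form of a Python bool. Treating the option as boolean normalizes both sides; a minimal sketch of the idea (hypothetical helper, not the module's own option typing):

```python
def normalize_bool_option(value):
    # nmcli prints boolean properties as "yes"/"no", while Ansible passes
    # Python bools whose str() is "True"/"False". Normalizing both sides
    # to bool makes the comparison idempotent.
    if isinstance(value, bool):
        return value
    return str(value).strip().lower() in ("yes", "true", "1", "on")


current = normalize_bool_option("no")   # from `nmcli connection show`
desired = normalize_bool_option(False)  # from the task's parameters
changed = current != desired            # no spurious diff any more
```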
patchback[bot]
fc5de1a194 [PR #11548/2f33ff10 backport][stable-12] keycloak_authentication: fix TypeError when flow has no authenticationExecutions (#11565)
keycloak_authentication: fix TypeError when flow has no authenticationExecutions (#11548)

* TIAAS-12174: fix(keycloak_authentication): handle None authenticationExecutions

When a flow is defined without authenticationExecutions, module.params.get()
returns None but the key still exists in the config dict. The 'in' check
passes but iterating over None raises TypeError.

Guard the iteration with an explicit None check.

* keycloak_authentication: add changelog fragment for NoneType fix

* keycloak_authentication: update changelog fragment with PR link

* Update plugins/modules/keycloak_authentication.py



* Changelog polishing

---------



(cherry picked from commit 2f33ff1041)

Co-authored-by: Ivan Kokalovic <67540157+koke1997@users.noreply.github.com>
Co-authored-by: Ivan Kokalovic <ivan.kokalovic@example.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-03-10 06:57:51 +01:00
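The failure mode described above is easy to reproduce: a key that exists with a None value passes the `in` check but cannot be iterated. A minimal illustration of the bug and the guard:

```python
config = {"authenticationExecutions": None}  # key present, value is None

# Buggy pattern: the membership test succeeds even though the value is
# None, so `for x in config["authenticationExecutions"]` would raise
# TypeError.
key_present = "authenticationExecutions" in config

# Guarded pattern from the fix: fall back to an empty list before
# iterating, so a None value is simply skipped.
collected = []
for execution in config.get("authenticationExecutions") or []:
    collected.append(execution)
```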
patchback[bot]
80184b6fd4 [PR #11562/93112d23 backport][stable-12] monit: remove unstable tag from integration tests (#11563)
monit: remove unstable tag from integration tests (#11562)

(cherry picked from commit 93112d23e5)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-03-10 06:57:40 +01:00
patchback[bot]
be7dc5f37d [PR #11555/71f8c15d backport][stable-12] Allow setting of independent custom domain for incus inventory (#11560)
Allow setting of independent custom domain for incus inventory (#11555)

Allowing the domain suffix to be appended independently of the `host_fqdn`
setting enables the inventory plugin to construct proper FQDNs if a
network has the `dns.domain` property set. Otherwise you would always
end up with something like `host01.project.local.example.net` despite
`host01.example.net` being the expected result.

(cherry picked from commit 71f8c15d2e)

Co-authored-by: Roland Sommer <rol@ndsommer.de>
2026-03-07 19:12:30 +01:00
patchback[bot]
fc7bcccc9d [PR #11552/aaef821f backport][stable-12] Update links to iocage. Current iocage documentation is at freebsd.gi… (#11554)
Update links to iocage. Current iocage documentation is at freebsd.gi… (#11552)

Update links to iocage. Current iocage documentation is at freebsd.github.io/iocage/

(cherry picked from commit aaef821f60)

Co-authored-by: Vladimir Botka <vbotka@gmail.com>
2026-03-06 05:53:33 +00:00
patchback[bot]
5cb4632c15 [PR #11540/137f5444 backport][stable-12] aix_*: deprecation (#11550)
aix_*: deprecation (#11540)

* aix_*: deprecation

* add changelog frag

* update chglog

* adjustments from review

* typo

* wordsmithing from review

(cherry picked from commit 137f5444e3)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-03-04 21:59:38 +01:00
patchback[bot]
eae5987be1 [PR #11544/9b9d8eac backport][stable-12] Update tests to pass on macOS arm64 (#11545)
Update tests to pass on macOS arm64 (#11544)

(cherry picked from commit 9b9d8eac09)

Co-authored-by: Matt Clay <matt@mystile.com>
2026-02-27 19:29:47 +01:00
patchback[bot]
d45044790a [PR #11538/8929caec backport][stable-12] Fix description error in CONTRIBUTING.md (#11539)
Fix description error in CONTRIBUTING.md (#11538)

Fix text error in CONTRIBUTING.md.

Updated instructions for running format tests.

(cherry picked from commit 8929caece6)

Co-authored-by: IamLunchbox <56757745+IamLunchbox@users.noreply.github.com>
2026-02-25 06:53:14 +01:00
Felix Fontein
434f7ce55b The next expected release will be 12.5.0. 2026-02-23 18:38:55 +01:00
Felix Fontein
f88b8c85d7 Release 12.4.0. 2026-02-23 17:50:05 +01:00
patchback[bot]
6385fbe038 [PR #11534/e118b23b backport][stable-12] Simplify and extend from_ini tests (#11535)
Simplify and extend from_ini tests (#11534)

Simplify and extend from_ini tests.

(cherry picked from commit e118b23ba0)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-02-23 06:30:35 +01:00
patchback[bot]
4b6cd41512 [PR #11462/ce7cb4e9 backport][stable-12] New module icinga2_downtime (#11532)
New module icinga2_downtime (#11462)

* feat: Icinga 2 downtime module added, allowing downtimes to be scheduled and removed through its REST API.



* ensure compatibility with ModuleTestCase

feat: errors raised from MH now contain the changed flag
ref: move module exit out of the decorated run method



* revised module

ref: module refactored using StateModuleHelper now
ref: suggested changes by reviewer added



* revert change regarding changed flag in MH



* refactoring and set changed flag explicitly on error



* Removed the check whether there was a state change on module failure.



* ref: test cases migrated to the new feature that allows passing through exceptions



* Update plugins/module_utils/icinga2.py



* Update plugins/module_utils/icinga2.py



* Update plugins/modules/icinga2_downtime.py



* ref: make module helper private



* fix: ensure that all non-null values are added to the request otherwise a `false` value is dropped



* ref: module description extended with the note that check mode is not supported



* Update plugins/modules/icinga2_downtime.py



* fix: documentation updated



* ref: documentation updated
ref: doc fragment added



* Update plugins/doc_fragments/icinga2_api.py



* ref: doc fragment renamed to `_icinga2_api.py`



* ref: maintainer to doc fragment in BOTMETA.yml added



* Update plugins/modules/icinga2_downtime.py



* Update plugins/modules/icinga2_downtime.py



* Update plugins/modules/icinga2_downtime.py



---------





(cherry picked from commit ce7cb4e914)

Signed-off-by: Fiehe Christoph  <c.fiehe@eurodata.de>
Co-authored-by: Christoph Fiehe <cfiehe@users.noreply.github.com>
Co-authored-by: Fiehe Christoph <c.fiehe@eurodata.de>
Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-02-23 06:17:51 +01:00
patchback[bot]
8c429ac69d [PR #11485/cb91ff42 backport][stable-12] Fix: avoid deprecated callback. (#11531)
Fix: avoid deprecated callback. (#11485)

* Fix: avoid deprecated callback.

* addition of changelog

* Improve changelog fragment.

---------



(cherry picked from commit cb91ff424f)

Co-authored-by: Tom Uijldert <155556120+TomUijldert@users.noreply.github.com>
Co-authored-by: tom uijldert <tom.uijldert@gmail.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-02-23 06:17:28 +01:00
patchback[bot]
30eb35cb95 [PR #11512/aec0e61b backport][stable-12] adds parameter delimiters to from_ini filter (#11533)
adds parameter delimiters to from_ini filter (#11512)

* adds parameter delimiters to from_ini filter

fixes issue #11506

* adds changelog fragment

* fixes pylint dangerous-default-value / W0102

* does not assume default delimiters

let that be decided in the super class

* Update plugins/filter/from_ini.py

verbose description



* Update changelogs/fragments/11512-from_ini-delimiters.yaml



* adds input validation

* adds check for delimiters not None

* adds missing import

* removes the negation

* adds suggestions from russoz

* adds ruff format suggestion

---------


(cherry picked from commit aec0e61ba1)

Co-authored-by: Robert Sander <github@gurubert.de>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-02-23 06:17:00 +01:00
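The new `delimiters` parameter passes straight through to `configparser.ConfigParser`, which already accepts it; only forwarding it when set leaves the default to the superclass (as the "does not assume default delimiters" commit says) and sidesteps pylint's dangerous-default-value warning. A sketch of the idea (`from_ini_text` is an illustrative helper, not the filter's actual code):

```python
import configparser


def from_ini_text(text, delimiters=None):
    # Only pass delimiters to ConfigParser when the user set it, so the
    # superclass default ("=", ":") applies otherwise.
    kwargs = {}
    if delimiters is not None:
        kwargs["delimiters"] = delimiters
    parser = configparser.ConfigParser(**kwargs)
    parser.read_string(text)
    return {section: dict(parser.items(section)) for section in parser.sections()}


default_parsed = from_ini_text("[db]\nhost = localhost\n")
custom_parsed = from_ini_text("[db]\nhost @ localhost\n", delimiters=("@",))
```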
Felix Fontein
33f3e7172b Prepare 12.4.0. 2026-02-22 16:39:25 +01:00
patchback[bot]
c2751dd6f5 [PR #11513/0e184d24 backport][stable-12] add support for localizationTexts in keycloak_realm.py (#11530)
add support for localizationTexts in keycloak_realm.py (#11513)

* add support for localizationTexts in keycloak_realm.py

* add changelog fragment

* change version added to next minor release

* Update changelogs/fragments/11513-keycloak-realm-localizationTexts-support.yml



* Update plugins/modules/keycloak_realm.py



---------


(cherry picked from commit 0e184d24cf)

Co-authored-by: nwintering <33374766+nwintering@users.noreply.github.com>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-02-21 23:07:08 +01:00
patchback[bot]
d3dd685ad4 [PR #11515/7cd75945 backport][stable-12] #11502 Fix mapping of config of keycloak_user_federation (#11529)
#11502 Fix mapping of config of keycloak_user_federation (#11515)

* #11502 Fix mapping of config

Fix mapping of config

Fix diff for mappers

* Fix formatting with nox

* Update changelogs/fragments/11502-keycloak-config-mapper.yaml



* Remove duplicate comment
https://github.com/ansible-collections/community.general/pull/11515#discussion_r2821444756

---------


(cherry picked from commit 7cd75945b2)

Co-authored-by: mixman68 <greg.djg13@gmail.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-02-21 12:11:19 +01:00
patchback[bot]
696b6e737a [PR #11523/1ae058db backport][stable-12] reduce collection build time with build_ignore (#11528)
reduce collection build time with build_ignore (#11523)

* reduce build time with build_ignore



* just ignore .nox



---------


(cherry picked from commit 1ae058db63)

Signed-off-by: Thomas Sjögren <konstruktoid@users.noreply.github.com>
Co-authored-by: Thomas Sjögren <konstruktoid@users.noreply.github.com>
2026-02-21 11:43:25 +01:00
patchback[bot]
45d16053ee [PR #10306/38f93c80 backport][stable-12] New Callback plugin: loganalytics_ingestion adding Azure Log Analytics Ingestion (#11527)
New Callback plugin: `loganalytics_ingestion` adding Azure Log Analytics Ingestion (#10306)

* Add Azure Log Analytics Ingestion API plugin

The Ingestion API allows sending data to a Log Analytics workspace in
Azure Monitor.

* Fix LogAnalytics Ingestion shebang

* Fix Log Analytics Ingestion pep8 tests

* Fix Log Analytics Ingestion pylint tests

* Fix Log Analytics Ingestion import tests

* Fix Log Analytics Ingestion pylint test

* Add Log Analytics Ingestion auth timeout

Previous behavior was to use the 'request' module's default timeout;
this makes auth timeout value consistent with the task submission
timeout value.

* Display Log Analytics Ingestion event data as JSON

Previous behavior was to display the data as a Python dictionary.
The new behavior makes it easier to generate a sample JSON file in order
to import into Azure when creating the table.

* Add Azure Log Analytics Ingestion timeout param

This parameter controls how long the plugin will wait for an HTTP response
from the Azure Log Analytics API before considering the request a failure.
Previous behavior was hardcoded to 2 seconds.

* Fix Azure Log Ingestion unit test

The class instantiation was missing an additional argument that was added
in a previous patch; add it.  Converting to JSON also caused the Mock
TaskResult object to throw a serialization error; override the function
for JSON conversion to just return bogus data instead.

* Fix loganalytics_ingestion linter errors

* Fix LogAnalytics Ingestion env vars

Prefix the LogAnalytics Ingestion plugin's environment variable names
with 'ANSIBLE_' in order to align with plugin best practices.

* Remove LogAnalytics 'requests' dep from docs

The LogAnalytics callback plugin does not actually require 'requests',
so remove it from the documented dependencies.

* Refactor LogAnalytics Ingestion to use URL utils

This replaces the previous behavior of depending on the external
'requests' library.

* Simplify LogAnalytics Ingestion token valid check



* Remove LogAnalytics Ingestion extra arg validation

Argument validation should be handled by ansible-core, so remove the
extra argument validation in the plugin itself.

* Update LogAnalytics Ingestion version added

* Remove LogAnalytics Ingestion coding marker

The marker is no longer needed as Python2 is no longer supported.

* Fix some LogAnalytics Ingestion grammar errors

* Refactor LogAnalytics Ingestion plugin messages

Consistently use "plugin" instead of module, and refer to the module by
its FQCN instead of its prose name.

* Remove LogAnalytics Ingestion extra logic

A few unused vars were being set; stop setting them.

* Fix LogAnalytics Ingestion nox sanity tests

* Fix LogAnalytics Ingestion unit tests

The refactor to move away from the 'requests' dependency to use
module_utils broke the plugin's unit tests; re-write the plugin's unit
tests for module_utils.

* Add nox formatting to LogAnalytics Ingestion

* Fix Log Analytics Ingestion urllib import

Remove the compatibility import via 'six' for 'urllib' since Python 2
support is no longer supported.

* Bump LogAnalytics Ingestion plugin version added

* Remove LogAnalytics Ingestion required: false docs

Required being false is the default, so no need to explicitly add it.

* Simplify LogAnalytics Ingestion role name logic

* Clean LogAnalytics Ingestion redundant comments

* Clean LogAnalytics Ingestion unit test code

Rename all Mock objects to use snake_case and consistently use '_mock'
as a suffix instead of sometimes using it as a prefix and sometimes
using it as a suffix.

* Refactor LogAnalytics Ingestion unit tests

Move all of the tests outside of the 'setUp' method.

* Refactor LogAnalytics Ingestion test

Add a test to validate that part of the contents sent match what was
supposed to be sent.

* Refactor LogAnalytics Ingestion test

Make the names consistent again.

* Add LogAnalytics Ingestion sample data docs

* Apply suggestions from code review



---------


(cherry picked from commit 38f93c80f1)

Co-authored-by: wtcline-intc <wade.cline@intel.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-02-21 11:43:16 +01:00
patchback[bot]
1d4fd21702 [PR #11471/34938ca1 backport][stable-12] keycloak_user_rolemapping: handle None response for client role lookup (#11522)
keycloak_user_rolemapping: handle None response for client role lookup (#11471)

* fix(keycloak_user_rolemapping): handle None response for client role lookup

When adding a client role to a user who has no existing roles for that
client, get_client_user_rolemapping_by_id() returns None. The existing
code indexed directly into the result causing a TypeError. Add the same
None check that already existed for realm roles since PR #11256.

Fixes #10960

* fix(tests): use dict format for task vars in keycloak_user_rolemapping tests

Task-level vars requires a YAML mapping, not a sequence. The leading
dash (- roles:) produced a list instead of a dict, which ansible-core
2.20 rejects with "Vars in a Task must be specified as a dictionary".

* Update changelogs/fragments/keycloak-user-rolemapping-client-none-check.yml



---------


(cherry picked from commit 34938ca1ef)

Co-authored-by: Ivan Kokalovic <67540157+koke1997@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-02-18 20:50:15 +01:00
patchback[bot]
bfcdeeab91 [PR #11468/80d21f2a backport][stable-12] keycloak_realm_key: add full support for all Keycloak key providers (#11519)
keycloak_realm_key: add full support for all Keycloak key providers (#11468)

* feat(keycloak_realm_key): add support for auto-generated key providers

Add support for Keycloak's auto-generated key providers where Keycloak
manages the key material automatically:

- rsa-generated: Auto-generates RSA signing keys
- hmac-generated: Auto-generates HMAC signing keys
- aes-generated: Auto-generates AES encryption keys
- ecdsa-generated: Auto-generates ECDSA signing keys

New algorithms:
- HMAC: HS256, HS384, HS512
- ECDSA: ES256, ES384, ES512
- AES: AES (no algorithm parameter needed)

New config options:
- secret_size: For HMAC/AES providers (key size in bytes)
- key_size: For RSA-generated provider (key size in bits)
- elliptic_curve: For ECDSA-generated provider (P-256, P-384, P-521)

Changes:
- Make private_key/certificate optional (only required for rsa/rsa-enc)
- Add provider-algorithm validation with clear error messages
- Fix KeyError when managing default realm keys (issue #11459)
- Maintain backward compatibility: RS256 default works for rsa/rsa-generated

Fixes: #11459

* fix: address sanity test failures

- Add 'default: RS256' to algorithm documentation to match spec
- Add no_log=True to secret_size parameter per sanity check

* feat(keycloak_realm_key): extend support for all Keycloak key providers

Add support for remaining auto-generated key providers:
- rsa-enc-generated (RSA encryption keys with RSA1_5, RSA-OAEP, RSA-OAEP-256)
- ecdh-generated (ECDH key exchange with ECDH_ES, ECDH_ES_A128KW/A192KW/A256KW)
- eddsa-generated (EdDSA signing with Ed25519, Ed448 curves)

Changes:
- Add provider-specific elliptic curve config key mapping
  (ecdsaEllipticCurveKey, ecdhEllipticCurveKey, eddsaEllipticCurveKey)
- Add PROVIDERS_WITHOUT_ALGORITHM constant for providers that don't need algorithm
- Add elliptic curve validation per provider type
- Update documentation with all supported algorithms and examples
- Add comprehensive integration tests for all new providers

This completes full coverage of all Keycloak key provider types.

* style: apply ruff formatting

* feat(keycloak_realm_key): add java-keystore provider and update_password

Add support for java-keystore provider to import keys from Java
Keystore (JKS or PKCS12) files on the Keycloak server filesystem.

Add update_password parameter to control password handling for
java-keystore provider:
- always (default): Always send passwords to Keycloak
- on_create: Only send passwords when creating, preserve existing
  passwords when updating (enables idempotent playbooks)

The on_create mode sends the masked value ("**********") that Keycloak
recognizes as "preserve existing password", matching the behavior when
re-importing an exported realm.

Replace password_checksum with update_password - the checksum approach
was complex and error-prone. The update_password parameter is simpler
and follows the pattern used by ansible.builtin.user module.

Also adds key_info return value containing kid, certificate fingerprint,
status, and expiration for java-keystore keys.

* address PR review feedback

- Remove no_log=True from secret_size (just an int, not sensitive)
- Add version_added: 12.4.0 to new parameters and return values
- Remove "Added in community.general 12.4.0" from description text
- Consolidate changelog entries into 4 focused entries
- Remove bugfix from changelog (now in separate PR #11470)

* address review feedback from russoz and felixfontein

- remove docstrings from module-local helpers
- remove line-by-line comments and unnecessary null guard
- use specific exceptions instead of bare except Exception
- use module.params["key"] instead of .get("key")
- consolidate changelog into single entry
- avoid "complete set" claim, reference Keycloak 26 instead

* address round 2 review feedback

- Extract remove_sensitive_config_keys() helper (DRY refactor)
- Simplify RS256 validation to single code path
- Add TypeError to inner except in compute_certificate_fingerprint()
- Remove redundant comments (L812, L1031)
- Switch .get() to direct dict access for module.params

(cherry picked from commit 80d21f2a0d)

Co-authored-by: Ivan Kokalovic <67540157+koke1997@users.noreply.github.com>
2026-02-18 18:36:48 +01:00
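The `key_info` return value mentioned above includes a certificate fingerprint; computing one from a PEM certificate only needs the stdlib. A hypothetical sketch (the helper name echoes the commit; the "PEM" below is dummy data, not a real certificate):

```python
import base64
import hashlib
import textwrap


def compute_certificate_fingerprint(pem_cert):
    # Strip the PEM armor, decode the body to DER, then hash with SHA-256
    # and render as colon-separated hex pairs.
    body = "".join(
        line for line in pem_cert.splitlines()
        if line and not line.startswith("-----")
    )
    der = base64.b64decode(body)
    digest = hashlib.sha256(der).hexdigest()
    return ":".join(textwrap.wrap(digest.upper(), 2))


# Dummy payload ("test"), used only to demonstrate the transformation.
fingerprint = compute_certificate_fingerprint(
    "-----BEGIN CERTIFICATE-----\ndGVzdA==\n-----END CERTIFICATE-----\n"
)
```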
patchback[bot]
5dcb3b8f59 [PR #10841/986118c0 backport][stable-12] keycloak_realm_localization: new module - realm localization control (#11517)
keycloak_realm_localization: new module - realm localization control (#10841)

* add support for management of keycloak localizations

* unit test for keycloak localization support

* keycloak_realm_localization botmeta record

* rev: improvements after code review

(cherry picked from commit 986118c0af)

Co-authored-by: Jakub Danek <danekja@users.noreply.github.com>
2026-02-18 07:44:44 +01:00
patchback[bot]
42c20a754b [PR #11488/5e0fd120 backport][stable-12] ModuleHelper: ensure compatibility with ModuleTestCase (#11518)
ModuleHelper: ensure compatibility with `ModuleTestCase` (#11488)

* ModuleHelper: ensure compatibility with `ModuleTestCase`.

This change makes it possible to configure the `module_fails_on_exception` decorator by passing a tuple of exception types that should not be handled by the decorator itself. In the context of `ModuleTestCase`, use `(AnsibleExitJson, AnsibleFailJson)` to let them pass through the decorator without modification.



* Another approach allowing user-defined exception types to pass through the decorator. When the decorator should have no arguments at all, we must hard code the name of the attribute that is looked up on self.



* Approach that removes decorator parametrization and relies on an object/class variable named `unhandled_exceptions`.



* context manager implemented that allows some exception types to pass through



* Update changelogs/fragments/11488-mh-ensure-compatibiliy-with-module-tests.yml



* Exception placeholder added



---------




(cherry picked from commit 5e0fd1201c)

Signed-off-by: Fiehe Christoph  <c.fiehe@eurodata.de>
Co-authored-by: Christoph Fiehe <cfiehe@users.noreply.github.com>
Co-authored-by: Fiehe Christoph <c.fiehe@eurodata.de>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-02-18 07:26:47 +01:00
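The pass-through behaviour the discussion above settled on can be illustrated with a small context manager (all names here, including `error_handler` and `FakeExitJson`, are illustrative stand-ins, not the actual ModuleHelper API):

```python
from contextlib import contextmanager


class ModuleError(Exception):
    """Stand-in for a module failure."""


@contextmanager
def error_handler(passthrough=()):
    # Exception types listed in `passthrough` (in tests, e.g. the
    # harness's AnsibleExitJson/AnsibleFailJson) re-raise unchanged;
    # anything else is turned into a module failure.
    try:
        yield
    except passthrough:
        raise
    except Exception as exc:
        raise ModuleError(str(exc)) from exc


class FakeExitJson(Exception):
    """Mimics a test-harness exception that must pass through untouched."""


passed_through = False
try:
    with error_handler(passthrough=(FakeExitJson,)):
        raise FakeExitJson()
except FakeExitJson:
    passed_through = True

wrapped = False
try:
    with error_handler():
        raise RuntimeError("boom")
except ModuleError:
    wrapped = True
```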
patchback[bot]
75b6b4d792 [PR #11461/4bbedfd7 backport][stable-12] nsupdate: fix missing keyring initialization without TSIG auth (#11516)
nsupdate: fix missing keyring initialization without TSIG auth (#11461)

* nsupdate: fix missing keyring initialization without TSIG auth

* Update changelogs/fragments/fix-nsupdate-keyring.yml



---------


(cherry picked from commit 4bbedfd7df)

Co-authored-by: Pascal <pascal.guinet@free.fr>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-02-18 06:57:33 +01:00
patchback[bot]
a0c4308bed [PR #11503/85a0deee backport][stable-12] keycloak module utils: group search optimization (#11511)
keycloak module utils: group search optimization (#11503)

* Updated get_group_by_name with a query based lookup for improved speed

* Add changelog fragment for keycloak group search optimization

* Address review feedback: update changelog text and reformat code with ruff

* improved changelog fragment

* Update changelogs/fragments/11503-keycloak-group-search-optimization.yml



---------


(cherry picked from commit 85a0deeeba)

Co-authored-by: Andreas Wegmann <andreas.we9mann@gmail.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-02-14 21:14:52 +01:00
patchback[bot]
6437fe15c8 [PR #11486/c05c3133 backport][stable-12] seport: Add support for dccp and sctp protocols (#11509)
seport: Add support for dccp and sctp protocols (#11486)

Support for the dccp and sctp protocols was added to the SELinux userspace
Python libraries in the 3.0 release in November 2019.

(cherry picked from commit c05c31334b)

Co-authored-by: Petr Lautrbach <lautrbach@redhat.com>
2026-02-14 21:14:44 +01:00
patchback[bot]
baddfa5a80 [PR #11501/ed7ccbe3 backport][stable-12] maven_artifact: resolve SNAPSHOT to latest using snapshot metadata block (#11508)
maven_artifact: resolve SNAPSHOT to latest using snapshot metadata block (#11501)

* fix(maven_artifact): resolve SNAPSHOT to latest using snapshot metadata block

Prefer the <snapshot> block (timestamp + buildNumber) from maven-metadata.xml
which always points to the latest build, instead of scanning <snapshotVersions>
and returning on the first match. Repositories like GitHub Packages keep all
historical entries in <snapshotVersions> (oldest first), causing the module to
resolve to the oldest snapshot instead of the latest.

Fixes #5117
Fixes #11489

* fix(maven_artifact): address review feedback

- Check both timestamp and buildNumber before using snapshot block,
  preventing IndexError when buildNumber is missing
- Remove unreliable snapshotVersions scanning fallback; use literal
  -SNAPSHOT version for non-unique snapshot repos instead
- Add tests for incomplete snapshot block and non-SNAPSHOT versions

* fix(maven_artifact): restore snapshotVersions scanning with last-match

Restore <snapshotVersions> scanning as primary resolution (needed for
per-extension accuracy per MNG-5459), but collect the last match instead
of returning on the first. Fall back to <snapshot> block when no
<snapshotVersions> match is found, then to literal -SNAPSHOT version.

* docs: update changelog fragment to match final implementation

* fix(maven_artifact): use updated timestamp for snapshot resolution

Use the <updated> attribute to select the newest snapshotVersion entry
instead of relying on list order. This works independently of how the
repository manager sorts entries in maven-metadata.xml.

Also fix test docstring and update changelog fragment per reviewer
feedback.

* test(maven_artifact): shuffle entries to verify updated timestamp sorting

Reorder snapshotVersion entries so the newest JAR is in the middle,
not at the end. This ensures the test actually validates that resolution
uses the <updated> timestamp rather than relying on list position.

(cherry picked from commit ed7ccbe3d4)

Co-authored-by: Adam R. <ariwk@protonmail.com>
2026-02-14 21:14:36 +01:00
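The resolution strategy the commits above converge on (scan `<snapshotVersions>` for the requested extension and pick the entry with the newest `<updated>` timestamp, instead of trusting list order) can be sketched like this. It is a simplified, hypothetical helper; the module also falls back to the `<snapshot>` block and the literal `-SNAPSHOT` version:

```python
import xml.etree.ElementTree as ET

# Minimal maven-metadata.xml with the newest JAR deliberately in the
# middle, mirroring the shuffled test described above.
METADATA = """<metadata>
  <versioning>
    <snapshotVersions>
      <snapshotVersion>
        <extension>jar</extension>
        <value>1.0-20240101.120000-1</value>
        <updated>20240101120000</updated>
      </snapshotVersion>
      <snapshotVersion>
        <extension>jar</extension>
        <value>1.0-20240301.090000-3</value>
        <updated>20240301090000</updated>
      </snapshotVersion>
      <snapshotVersion>
        <extension>jar</extension>
        <value>1.0-20240201.100000-2</value>
        <updated>20240201100000</updated>
      </snapshotVersion>
    </snapshotVersions>
  </versioning>
</metadata>"""


def resolve_snapshot(xml_text, extension="jar"):
    # Select by newest <updated> timestamp (fixed-width, so string
    # comparison orders correctly) rather than by list position.
    root = ET.fromstring(xml_text)
    candidates = [
        sv for sv in root.iter("snapshotVersion")
        if sv.findtext("extension") == extension
    ]
    if not candidates:
        return None
    newest = max(candidates, key=lambda sv: sv.findtext("updated"))
    return newest.findtext("value")
```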
patchback[bot]
b7d1483a08 [PR #11500/c9313af9 backport][stable-12] keycloak_identity_provider: add claims example for oidc-advanced-group-idp-mapper (#11507)
keycloak_identity_provider: add claims example for oidc-advanced-group-idp-mapper (#11500)

Add claims example for oidc-advanced-group-idp-mapper

For me it wasn't clear how to create claims using oidc-advanced-group-idp-mapper; perhaps other people can benefit from the following example.

(cherry picked from commit c9313af971)

Co-authored-by: David Filipe <68902816+daveopz@users.noreply.github.com>
2026-02-14 21:14:17 +01:00
patchback[bot]
b87121e1eb [PR #11504/8729f563 backport][stable-12] Update check_availability_service to return data instead of boolean (#11510)
Update check_availability_service to return data instead of boolean (#11504)

* Update check_availability_service to return data instead of boolean

* Add changelog fragment

(cherry picked from commit 8729f563b3)

Co-authored-by: Scott Seekamp <13857911+sseekamp@users.noreply.github.com>
2026-02-14 21:14:07 +01:00
patchback[bot]
cb17703c36 [PR #11495/88adca3f backport][stable-12] python_requirements_info: use importlib.metadata when available (#11496)
python_requirements_info: use importlib.metadata when available (#11495)

Use importlib.metadata when available.

(cherry picked from commit 88adca3fb4)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-02-11 07:12:04 +01:00
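`importlib.metadata` has been in the stdlib since Python 3.8; "when available" typically means preferring it and falling back to `pkg_resources` on older interpreters. A sketch of that pattern (not the module's actual code):

```python
try:
    # Stdlib since Python 3.8.
    from importlib.metadata import PackageNotFoundError, version
    HAS_IMPORTLIB_METADATA = True
except ImportError:
    # Older interpreters: fall back to setuptools' pkg_resources.
    from pkg_resources import DistributionNotFound, get_distribution
    HAS_IMPORTLIB_METADATA = False


def installed_version(name):
    # Return the installed version of a distribution, or None if absent.
    if HAS_IMPORTLIB_METADATA:
        try:
            return version(name)
        except PackageNotFoundError:
            return None
    try:
        return get_distribution(name).version
    except DistributionNotFound:
        return None
```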
patchback[bot]
05d457dca7 [PR #11484/63ddca7f backport][stable-12] supervisorctl: remove unstable tag from integration tests (#11494)
supervisorctl: remove unstable tag from integration tests (#11484)

(cherry picked from commit 63ddca7f21)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-02-10 21:51:31 +01:00
patchback[bot]
7fce59fbc6 [PR #11479/476f2bf6 backport][stable-12] Integration tests: replace ansible_xxx with ansible_facts.xxx (#11480)
Integration tests: replace ansible_xxx with ansible_facts.xxx (#11479)

Replace ansible_xxx with ansible_facts.xxx.

(cherry picked from commit 476f2bf641)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-02-07 18:43:49 +01:00
patchback[bot]
de6967d3ff [PR #11473/df6d6269 backport][stable-12] keycloak_client: add valid_post_logout_redirect_uris and backchannel_logout_url (#11475)
keycloak_client: add valid_post_logout_redirect_uris and backchannel_logout_url (#11473)

* feat(keycloak_client): add valid_post_logout_redirect_uris and backchannel_logout_url

Add two new convenience parameters that map to client attributes:

- valid_post_logout_redirect_uris: sets post.logout.redirect.uris
  attribute (list items joined with ##)
- backchannel_logout_url: sets backchannel.logout.url attribute

These fields are not top-level in the Keycloak REST API but are stored
as client attributes. The new parameters provide a user-friendly
interface without requiring users to know the internal attribute names
and ##-separator format.

Fixes #6812, fixes #4892
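A rough sketch of the attribute mapping described above (the attribute key `post.logout.redirect.uris` and the `##` separator come from the commit message; the helper name itself is hypothetical):

```python
def post_logout_redirect_attrs(uris):
    # Hypothetical helper: Keycloak stores these URIs as a single client
    # attribute whose entries are joined with '##', rather than as a
    # top-level field in the REST representation.
    return {"post.logout.redirect.uris": "##".join(uris)}

print(post_logout_redirect_attrs(["https://app.example.com/bye", "https://alt.example.com/bye"]))
```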

* consolidate changelog and add PR link per review feedback

(cherry picked from commit df6d6269a6)

Co-authored-by: Ivan Kokalovic <67540157+koke1997@users.noreply.github.com>
2026-02-07 16:34:46 +01:00
patchback[bot]
bbb9b03b5e [PR #11464/8b0ce3e2 backport][stable-12] community.general.copr: clarify includepkgs/excludepkgs (#11476)
community.general.copr: clarify includepkgs/excludepkgs (#11464)

At first glance, includepkgs seems to be something that would install
the package name from the given copr repo.  This isn't helped by the
example that says "Install caddy" which very much looks like it is
installing the package from the repo.  Not only did I, a human,
hallucinate this behaviour, so did a large search engine's AI
responses to related queries.

In fact these options filter which packages DNF sees from the repo.
Clarify this by using wording and examples closer to the upstream
documentation [1].

[1] https://dnf.readthedocs.io/en/latest/conf_ref.html
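As an illustrative repo-file fragment (the repo id and package name are made up for this sketch), `includepkgs` installs nothing; it only limits which packages DNF is allowed to see from that repo:

```ini
# Illustrative /etc/yum.repos.d fragment; the repo id here is hypothetical.
[copr:copr.fedorainfracloud.org:example:caddy]
enabled=1
# No package is installed by this line: DNF merely restricts this repo's
# visible package set to 'caddy'. Installing still needs its own transaction.
includepkgs=caddy
```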

(cherry picked from commit 8b0ce3e28f)

Co-authored-by: Ian Wienand <ian@wienand.org>
2026-02-07 16:34:38 +01:00
patchback[bot]
a0d6487f6d [PR #11455/af4dbafe backport][stable-12] keycloak_client: fix diff for keycloak client auth flow overrides (#11477)
keycloak_client: fix diff for keycloak client auth flow overrides (#11455)

* 11430: fix diff for keycloak client auth flow overrides

* 11430: add changelog fragment

* 11430: move util function merge_settings_without_absent_nulls to the util functions file _keycloak_utils

* 11443: code cleanup

---------


(cherry picked from commit af4dbafe86)

Co-authored-by: thomasbargetz <thomas.bargetz@gmail.com>
Co-authored-by: Thomas Bargetz <thomas.bargetz@rise-world.com>
2026-02-07 16:34:29 +01:00
patchback[bot]
88bfb6dda3 [PR #11470/10681731 backport][stable-12] keycloak_realm_key: handle missing config fields for default keys (#11478)
keycloak_realm_key: handle missing config fields for default keys (#11470)

* fix(keycloak_realm_key): handle missing config fields for default keys

Keycloak API may not return 'active', 'enabled', or 'algorithm' fields
in the config response for default/auto-generated realm keys. This caused
a KeyError when the module tried to compare these fields during state
detection.

Use .get() with the expected value as default to handle missing fields
gracefully, treating them as unchanged if not present in the API response.

Fixes: #11459
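A minimal sketch of the `.get()` pattern described above (the field names come from the commit message; the comparison helper is hypothetical):

```python
def realm_key_changed(desired_config, api_config):
    # Default/auto-generated realm keys may omit 'active', 'enabled', or
    # 'algorithm' in the API response; .get() with the desired value as the
    # default treats a missing field as unchanged instead of raising KeyError.
    return any(
        api_config.get(field, expected) != expected
        for field, expected in desired_config.items()
    )

desired = {"active": ["true"], "enabled": ["true"], "algorithm": ["RS256"]}
print(realm_key_changed(desired, {"algorithm": ["RS256"]}))  # → False: missing fields count as unchanged
```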

* add PR link to changelog entry per review feedback

(cherry picked from commit 106817316d)

Co-authored-by: Ivan Kokalovic <67540157+koke1997@users.noreply.github.com>
2026-02-07 16:34:22 +01:00
patchback[bot]
d637db7623 [PR #11472/c41de53d backport][stable-12] keycloak: URL-encode query parameters for usernames with special characters (#11474)
keycloak: URL-encode query parameters for usernames with special characters (#11472)

* fix(keycloak): URL-encode query params for usernames with special chars

get_user_by_username() concatenates the username directly into the URL
query string. When the username contains a +, it is interpreted as a
space by the server, returning no match and causing a TypeError.

Use urllib.parse.quote() (already imported) for the username parameter.
Also replace three fragile .replace(' ', '%20') calls in the authz
search methods with proper quote() calls.

Fixes #10305
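A small sketch of the difference (the endpoint path is illustrative; `quote()` is the standard-library function mentioned above):

```python
from urllib.parse import quote

username = "jane+doe@example.com"

# Naive concatenation: the literal '+' is decoded as a space server-side,
# so the user lookup silently matches nothing.
naive = f"/admin/realms/master/users?username={username}"

# Percent-encoding preserves '+' (and '@') in the query value.
encoded = f"/admin/realms/master/users?username={quote(username, safe='')}"
print(encoded)  # → /admin/realms/master/users?username=jane%2Bdoe%40example.com
```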

* Update changelogs/fragments/keycloak-url-encode-query-params.yml



---------


(cherry picked from commit c41de53dbb)

Co-authored-by: Ivan Kokalovic <67540157+koke1997@users.noreply.github.com>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-02-06 20:36:02 +01:00
patchback[bot]
2198588afa [PR #11454/b236772e backport][stable-12] keycloak_client: remove id's as change from diff for protocol mappers (#11469)
keycloak_client: remove id's as change from diff for protocol mappers (#11454)

* 11453 remove id's as change from diff for protocol mappers

* Update changelogs/fragments/11453-keycloak-client-protocol-mapper-ids.yml



---------


(cherry picked from commit b236772e57)

Co-authored-by: Simon Moosbrugger <707958+simonmoosbrugger@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-02-05 17:29:29 +01:00
Felix Fontein
9d6db6002c Add latest commit to .git-blame-ignore-revs.
(cherry picked from commit bce87a2a77)
2026-02-04 09:04:37 +01:00
patchback[bot]
dd9c86dfc0 [PR #11465/24098cd6 backport][stable-12] Reformat code (#11466)
Reformat code (#11465)

Reformat code.

(cherry picked from commit 24098cd638)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-02-04 09:04:00 +01:00
patchback[bot]
a266ba1d6e [PR #11457/95b24ac3 backport][stable-12] jboss: deprecation (#11458)
jboss: deprecation (#11457)

(cherry picked from commit 95b24ac3fe)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-01-31 10:03:36 +01:00
Felix Fontein
4167d8ebeb The next expected release will be 12.4.0. 2026-01-26 19:00:03 +01:00
Felix Fontein
e9064bbf97 Release 12.3.0. 2026-01-26 18:23:14 +01:00
patchback[bot]
79a5e6745b [PR #11444/ccf61224 backport][stable-12] keycloak_client: 11443: Fix false change detection for null client attributes (#11451)
keycloak_client: 11443: Fix false change detection for null client attributes (#11444)

* 11443: fix diff for keycloak_client module for non existing client attributes

* 11443: code cleanup

* 11443: add changelog fragment

* Adjust changelog fragment.

---------



(cherry picked from commit ccf61224f1)

Co-authored-by: thomasbargetz <thomas.bargetz@gmail.com>
Co-authored-by: Thomas Bargetz <thomas.bargetz@rise-world.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-01-26 17:43:17 +01:00
patchback[bot]
b5d57a35d6 [PR #11442/72220a2b backport][stable-12] fix gem module compatibility with ruby-4-rubygems (#11452)
fix gem module compatibility with ruby-4-rubygems (#11442)

* fix gem module compatibility with ruby-4-rubygems

RubyGems' `query` command has recently been removed, see ruby/rubygems#9083.
Address this by using the `list` command instead.
resolves #11397

* add changelog

* Adjust changelog fragment.

---------


(cherry picked from commit 72220a2b15)

Co-authored-by: glaszig <mail+github@glasz.org>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-01-26 17:43:07 +01:00
patchback[bot]
44dfe9e1ab [PR #11440/53e1e86b backport][stable-12] Logstash plugin version fix (#11450)
Logstash plugin version fix (#11440)

* logstash_plugin: fix argument order when using version parameter

* logstash_plugin: add integration tests

* logstash_plugin: add changelog fragment

(cherry picked from commit 53e1e86bcc)

Co-authored-by: Nicolas Boutet <amd3002@gmail.com>
2026-01-26 06:29:35 +01:00
patchback[bot]
4d05149b6c [PR #11368/aada8647 backport][stable-12] Adding 'project' parameter to Scaleway IP module. (#11447)
Adding 'project' parameter to Scaleway IP module. (#11368)

* Adding 'project' parameter to Scaleway IP module.

* Adding changelog fragment.

* Incrementing version.



* Updating docs to show both org and project ID options.

* Moving deprecated example to the end.

---------


(cherry picked from commit aada864718)

Co-authored-by: Greg Harvey <greg.harvey@gmail.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-01-25 21:05:27 +00:00
patchback[bot]
31bab91c31 [PR #11366/c0df3664 backport][stable-12] Adding 'project' parameter support for the Scaleway SG module. (#11448)
Adding 'project' parameter support for the Scaleway SG module. (#11366)

* Adding 'project' parameter support for the Scaleway SG module.

* Adding changelog fragment.

* Fixing documentation, organization is deprecated (although still available).

* Updating docs to show both org and project ID options.

* Incrementing version.



* Moving deprecated example to the end.

---------


(cherry picked from commit c0df366471)

Co-authored-by: Greg Harvey <greg.harvey@gmail.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-01-25 21:05:10 +00:00
patchback[bot]
ccdf82f163 [PR #11445/f9334656 backport][stable-12] Cleanup (#11446)
Cleanup (#11445)

* Correctly position BOTMETA entry.

* Standardize to 'import typing as t'.

* Remove platform attribute.

(cherry picked from commit f933465658)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-01-25 18:56:23 +01:00
patchback[bot]
1670f8693a [PR #11322/7a18af80 backport][stable-12] Handle @Redfish.Settings when setting ComputerSystem boot attributes (#11441)
Handle @Redfish.Settings when setting ComputerSystem boot attributes (#11322)

* set_boot_override function now uses Redfish Settings URI if available in ComputerSystem resource

* Follows code formatting rules

* Add changelogs fragments file

* Update changelogs/fragments/11322-handle-redfish-settings-in-setbootoverride.yml



* Explicit rewriting as a workaround to keep the "good" path clean.

* Adjust changelog fragment.

---------




(cherry picked from commit 7a18af80ce)

Co-authored-by: Pierre-yves Fontaniere <pyfontan@cc.in2p3.fr>
Co-authored-by: Pierre-yves FONTANIERE <pyf@cc.in2p3.fr>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-01-23 06:44:37 +01:00
patchback[bot]
d58777ff5e [PR #11423/864695f8 backport][stable-12] Add to_toml filter (#11438)
Add `to_toml` filter (#11423)

* Add to_toml filter

This is based heavily on the to_yaml filter, but
with a pared-down feature set.

* Protect import

* Don't quote datetime as a string

* Use Ansible error types

* Import correct error types

* Don't use AnsibleTypeError

It doesn't seem to be available on older Ansible
core versions.

* Fix antsibull-nox errors

* Install dependencies for to_toml integration test



* Reduce author list to main contributor



* Update version added for to_toml



* Use AnsibleError for missing import



* Use AnsibleFilterError for runtime type check



* Move common code to plugin_utils/_tags.py

* Mark module util as private



* Update BOTMETA for to_toml



* Fix typo

* Correct version number



* Use to_text for to_toml dict key conversions



* Add tomlkit requirement to docs



* Add missing import

* Add aliases for to_toml integration test

---------


(cherry picked from commit 864695f898)

Co-authored-by: Matt Williams <matt@milliams.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-01-22 07:40:51 +01:00
patchback[bot]
68f2433577 [PR #11425/9fcd9338 backport][stable-12] nsupdate: add server FQDN and GSS-TSIG support (#11439)
nsupdate: add server FQDN and GSS-TSIG support (#11425)

* nsupdate: support server FQDN

Right now, the server has to be specified as an IPv4/IPv6 address. This
adds support for specifying the server as an FQDN as well.

* nsupdate: support GSS-TSIG/Kerberos

Add support for GSS-TSIG (Kerberos) keys to nsupdate. This makes life
easier when working with Windows DNS servers or Bind in a Kerberos
environment.

Inspiration taken from here:
https://github.com/rthalley/dnspython/pull/530#issuecomment-1363265732

Closes: #5730

* nsupdate: introduce query helper function

This simplifies the code by moving the protocol checks, etc, into a
single place.

* nsupdate: try all server IP addresses

Change resolve_server() to generate a list of IPv[46] addresses, then
try all of them in a round-robin fashion in query().
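The resolve-then-iterate approach can be sketched roughly like this (the function names mirror the commit message; the send callback and UDP-only resolution are assumptions of this sketch):

```python
import socket

def resolve_server(server, port=53):
    # Accept a literal IPv4/IPv6 address or an FQDN; getaddrinfo() returns
    # every A/AAAA result so the caller can try each one in turn.
    return [info[4][0] for info in socket.getaddrinfo(server, port, proto=socket.IPPROTO_UDP)]

def query(server, send, port=53):
    # Try each resolved address in turn; fail only if every address fails.
    last_exc = OSError(f"no addresses for {server}")
    for addr in resolve_server(server, port):
        try:
            return send(addr)
        except OSError as exc:
            last_exc = exc
    raise last_exc

print(query("localhost", lambda addr: f"answer from {addr}"))
```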

* nsupdate: some more cleanups

As suggested in the PR review.

* nsupdate: apply suggestions from code review



---------


(cherry picked from commit 9fcd9338b1)

Co-authored-by: David Härdeman <david@hardeman.nu>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-01-22 07:38:12 +01:00
Felix Fontein
0f6dfd1ebb Prepare 12.3.0. 2026-01-20 22:42:03 +01:00
patchback[bot]
43f0152969 [PR #11421/9611dc25 backport][stable-12] time-command.py: make sure seconds is an int (#11436)
time-command.py: make sure seconds is an int (#11421)

Make sure seconds is an int.

(cherry picked from commit 9611dc258a)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-01-20 22:41:12 +01:00
patchback[bot]
68bd8babf7 [PR #11114/a8378a4e backport][stable-12] nmcli idempotency connection check (#11437)
nmcli idempotency connection check (#11114)

* nmcli idempotency connection check

* Changelog fragment and ruff reformat

* Fix : change error handling

* Remove odd conditions

* Refactor nmcli: fix error handling and remove redundant logic

* Fix code format

* Fix error message to handle

(cherry picked from commit a8378a4eb0)

Co-authored-by: Seddik Alaoui Ismaili <32570331+saibug@users.noreply.github.com>
2026-01-20 22:21:14 +01:00
patchback[bot]
ca805badc0 [PR #11413/4b0aeede backport][stable-12] feat(nmcli): Add support for IPv6 routing rules (#11432)
feat(nmcli): Add support for IPv6 routing rules (#11413)

* feat(nmcli): Add support for IPv6 routing rules

Closes #7094



* Add changelog fragment



* Fixing doc



* Add issue link to changelog fragment



* Fix version



---------




(cherry picked from commit 4b0aeede69)

Signed-off-by: Rémy Jacquin <remy@remyj.fr>
Co-authored-by: Rémy Jacquin <1536771+remyj38@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-01-16 21:32:52 +01:00
patchback[bot]
5b571fd53f [PR #11308/4b67afc2 backport][stable-12] Add option for wsl_shell_type, protect wsl.exe arguments if SSH shell is Powershell (#11433)
Add option for wsl_shell_type, protect wsl.exe arguments if SSH shell is Powershell (#11308)

* feat(wsl): add option for wsl_shell_type, protect wsl arguments if SSH shell is Powershell

* docs(wsl): add changelog fragment

* docs(wsl): fix changelog fragment syntax, add issue link



* feat(wsl): improve new option documentation



* refactor(wsl): put integration test flag into a variable for convenience

* feat(wsl): rename option to wsl_remote_ssh_shell_type

* feat(wsl): escape "%" if shell is cmd, raise AnsibleError if powershell

* test(wsl): fix unit tests for wsl

- remove redundant check - moved to a separate function
- fix check for cmd escaping of "%"
- fix formatting / whitespace

* test(wsl): fix expected error message

* test(wsl): fix test - position of stop-parsing token changed



---------


(cherry picked from commit 4b67afc2b0)

Co-authored-by: fizmat <fizmat.r66@gmail.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-01-16 21:26:29 +01:00
patchback[bot]
68b2385efd [PR #11427/0a702167 backport][stable-12] Update ignore.txt (#11429)
Update ignore.txt (#11427)

Update ignore.txt.

(cherry picked from commit 0a70216763)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-01-15 22:17:43 +01:00
Felix Fontein
b5c965939f Make sure stable-12 CI runs in cron.
(cherry picked from commit 28b16eab66)
2026-01-11 00:43:09 +01:00
patchback[bot]
136c7debe3 [PR #11417/a689bb8e backport][stable-12] CI: Arch Linux switched to Python 3.14 (#11420)
CI: Arch Linux switched to Python 3.14 (#11417)

Arch Linux switched to Python 3.14.

(cherry picked from commit a689bb8e8d)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-01-11 00:42:54 +01:00
patchback[bot]
5d16a88298 [PR #11347/e790b950 backport][stable-12] incus connection: fix regex (#11415)
incus connection: fix regex (#11347)

* incus connection: fix regex

* updates

* Apply suggestions from code review

* expand regexp capture

* add changelog frag

* Update plugins/connection/incus.py

* split arguments after command option

* Update plugins/connection/incus.py

* remove *() and split from the last command

* add tests, make small adjustments

* remove redundant strip()

* add more tests

* adjusted changelog fragment

(cherry picked from commit e790b95067)

* Order imports.

(cherry picked from commit 76d51db8d0)

---------

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-01-09 21:22:29 +01:00
Felix Fontein
02ed21b2a6 [stable-12] Configure sorting imports in CI and formatting (#11414)
Configure sorting imports in CI and formatting (#11410)

* Add reformat commit to .git-blame-ignore-revs.

* Make ruff also check the import order.

* Add ruff check --fix for imports to the nox formatting session.

(cherry picked from commit 91efa27cb9)
2026-01-09 19:41:54 +00:00
patchback[bot]
b769b0bc01 [PR #11400/236b9c0e backport][stable-12] Sort imports with ruff check --fix (#11409)
Sort imports with ruff check --fix (#11400)

Sort imports with ruff check --fix.

(cherry picked from commit 236b9c0e04)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-01-09 19:36:52 +01:00
patchback[bot]
ebaf2e71d5 [PR #11401/0e6ba072 backport][stable-12] Update CI pipelines (#11405)
Update CI pipelines (#11401)

Update CI pipelines:
- Fedora 42 -> 43 for devel
- RHEL 10.0 -> 10.1 for all ansible-core branches
- FreeBSD 13.5 -> 15.0 for devel
- Alpine 3.22 -> 3.23 for devel

(cherry picked from commit 0e6ba07261)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-01-08 12:28:46 +01:00
patchback[bot]
e3535de323 [PR #11396/c8356981 backport][stable-12] move imports from functions to the top of the file (#11399)
move imports from functions to the top of the file (#11396)

* move imports from functions to the top of the file

* add changelog frag

* Apply suggestions from code review



---------


(cherry picked from commit c8356981bb)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-01-07 21:27:49 +01:00
patchback[bot]
6169699b24 [PR #11388/defd1560 backport][stable-12] pmem: remove redundant use of regexp (#11398)
pmem: remove redundant use of regexp (#11388)

* pmem: remove redundant use of regexp

* add changelog frag

* add bugfixes entry

* Update plugins/modules/pmem.py



* Update plugins/modules/pmem.py



---------


(cherry picked from commit defd15609c)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-01-07 20:50:56 +01:00
patchback[bot]
e714d15891 [PR #11390/996b7469 backport][stable-12] slackpkg: simplify function query_package() (#11395)
slackpkg: simplify function `query_package()` (#11390)

* slackpkg: simplify function query_package()

* add changelog frag

(cherry picked from commit 996b7469e5)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-01-06 18:46:27 +01:00
patchback[bot]
dda90768f5 [PR #11391/b67c94fc backport][stable-12] fix ruff cases UP024,UP041 (#11394)
fix ruff cases UP024,UP041 (#11391)

* fix ruff cases UP024,UP041

* add changelog frag
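For context, UP024 and UP041 flag legacy exception aliases; since these names are the same classes at runtime, the rewrite is purely cosmetic:

```python
import socket

# UP024: IOError and EnvironmentError have been aliases of OSError since
# Python 3.3, so `except IOError` is `except OSError` in disguise.
print(IOError is OSError, EnvironmentError is OSError)  # → True True

# UP041: socket.timeout is an OSError subclass (and, on Python >= 3.10, an
# alias of the builtin TimeoutError), so the builtin names suffice.
try:
    raise socket.timeout("timed out")
except OSError as exc:
    print(isinstance(exc, socket.timeout))  # → True
```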

(cherry picked from commit b67c94fc3f)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-01-06 18:18:08 +01:00
patchback[bot]
cd548f779a [PR #11385/d1352702 backport][stable-12] CI: Let the Python formatters and linters apply to all files in the collection (#11386)
CI: Let the Python formatters and linters apply to all files in the collection (#11385)

Let the Python formatters and linters apply to all files in the collection.

(cherry picked from commit d1352702f9)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-01-06 17:02:36 +01:00
patchback[bot]
c1ba162ec0 [PR #11387/d4089ca2 backport][stable-12] Update RHEL 9.x to 9.7 in CI (#11389)
Update RHEL 9.x to 9.7 in CI (#11387)

* Update RHEL 9.x to 9.7 in CI.

* Add skips.

(cherry picked from commit d4089ca29a)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-01-06 17:02:28 +01:00
patchback[bot]
2d99eb92de [PR #11376/75234597 backport][stable-12] Support diff mode for netcup-dns module (#11378)
Support diff mode for netcup-dns module (#11376)

* support diff mode for netcup-dns module

* Fix issue with yaml encoding after testing

* Add changelog fragment

* Fixed: proper and robust yaml import

* Remove need for yaml import

* Show whole zone in diff for context

* Update changelogs/fragments/11376-netcup-dns-diff-mode.yml



* Update plugins/modules/netcup_dns.py
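Ansible's diff mode contract is a `diff` key holding `before`/`after` strings; returning the whole zone, as this change does, gives context around the changed record. A rough sketch (the record and zone shapes here are illustrative, not the module's actual data model):

```python
def render_zone(records):
    # Illustrative flat rendering of a DNS zone for diff output.
    return "\n".join(
        f"{r['hostname']} {r['type']} {r['destination']}"
        for r in sorted(records, key=lambda r: (r["hostname"], r["type"]))
    )

def make_result(before_records, after_records):
    before, after = render_zone(before_records), render_zone(after_records)
    # With --diff, Ansible displays a unified diff of these two strings.
    return {
        "changed": before != after,
        "diff": {"before": before + "\n", "after": after + "\n"},
    }

old = [{"hostname": "www", "type": "A", "destination": "192.0.2.1"}]
new = old + [{"hostname": "mail", "type": "A", "destination": "192.0.2.2"}]
print(make_result(old, new)["changed"])  # → True
```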



---------


(cherry picked from commit 75234597bc)

Co-authored-by: mqus <8398165+mqus@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-01-05 18:58:17 +01:00
patchback[bot]
e0bd7e334e [PR #11379/b3dc06a7 backport][stable-12] Clean up other Python files (#11382)
Clean up other Python files (#11379)

* Address issues found by ruff check.

* Make mypy happy; remove some Python 2 compat code.

* Also declare port1.

(cherry picked from commit b3dc06a7dd)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-01-05 18:57:50 +01:00
Felix Fontein
1d09a36e0f Fix version number.
(cherry picked from commit 00d2785794)
2026-01-05 18:57:41 +01:00
patchback[bot]
bb6d5fb735 [PR #11377/c00fb4fb backport][stable-12] cloudflare_dns: also allow 128 as a value for flag (#11383)
cloudflare_dns: also allow 128 as a value for flag (#11377)

* Also allow 128 as a value for flag.

* Forgot to add changelog fragment.

(cherry picked from commit c00fb4fb5c)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-01-05 18:57:22 +01:00
patchback[bot]
8242f6fa46 [PR #11301/13035e2a backport][stable-12] Add support for multiple managers to get_manager_attributes command in idrac_redfish_info module (#11375)
Add support for multiple managers to get_manager_attributes command in idrac_redfish_info module (#11301)

* Update get_manager_attributes method to support systems with multiple managers present

Fixes https://github.com/ansible-collections/community.general/issues/11294

* Add changelog fragment
Pre-define response for get_manager_attributes method

* Update changelogs/fragments/11301-idrac-info-multi-manager.yml

Update per suggestion!



* Update plugins/modules/idrac_redfish_info.py
Remove extra manager quantity check



---------


(cherry picked from commit 13035e2a2c)

Co-authored-by: Scott Seekamp <13857911+sseekamp@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-01-02 19:29:33 +01:00
patchback[bot]
88eee5fbb4 [PR #11357/ddf05104 backport][stable-12] Add missing integration test aliases files (#11372)
Add missing integration test aliases files (#11357)

* Add missing aliases files.

* Fix directory name.

* Add another missing aliases file.

* Adjust test to also work with newer jsonpatch versions.

(cherry picked from commit ddf05104f3)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-01-02 15:03:07 +01:00
patchback[bot]
5945c56b4c [PR #11369/20ba59cc backport][stable-12] Added "See Also" section (#11374)
Added "See Also" section (#11369)

* Added "See Also" section

* Corrected seealso documentation

* Update ini_file.py

Removed seealso descriptions

* Update to_ini.py

Removed seealso descriptions

* Update from_ini.py

Removed seealso descriptions

(cherry picked from commit 20ba59cce6)

Co-authored-by: daomah <129229601+daomah@users.noreply.github.com>
2026-01-02 15:02:56 +01:00
patchback[bot]
4514e271af [PR #11346/61d794f1 backport][stable-12] incus conn plugin: improve readability (was ruff: set target-python 3.7) (#11353)
incus conn plugin: improve readability (was ruff: set target-python 3.7) (#11346)

* incus connection plugin: improve readability

* add changelog frag

* Update plugins/connection/incus.py



* Update plugins/connection/incus.py

* Update plugins/connection/incus.py



---------


(cherry picked from commit 61d794f171)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-12-31 08:40:42 +01:00
patchback[bot]
f49c5f79a7 [PR #11343/e8f2b135 backport][stable-12] batch 3 - update Python idiom to 3.7 using pyupgrade (#11352)
batch 3 - update Python idiom to 3.7 using pyupgrade (#11343)

* batch 3 - update Python idiom to 3.7 using pyupgrade

* add changelog frag

* bring back sanity

* adjust test

* Apply suggestions from code review

(cherry picked from commit e8f2b135ba)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-12-30 22:43:24 +01:00
patchback[bot]
2d07481e64 [PR #11341/5b5f7e9e backport][stable-12] batch 1 - update Python idiom to 3.7 using pyupgrade (#11349)
batch 1 - update Python idiom to 3.7 using pyupgrade (#11341)

* batch 1 - update Python idiom to 3.7 using pyupgrade

* add changelog frag

* add changelog frag

(cherry picked from commit 5b5f7e9e64)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-12-30 16:47:11 +01:00
patchback[bot]
41f815be57 [PR #11344/543329ce backport][stable-12] batch 4 - update Python idiom to 3.7 using pyupgrade (#11350)
batch 4 - update Python idiom to 3.7 using pyupgrade (#11344)

* batch 4 - update Python idiom to 3.7 using pyupgrade

* add changelog frag

* bring back sanity

* remove unused import

(cherry picked from commit 543329cecb)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-12-30 16:31:29 +01:00
patchback[bot]
8d4e702d89 [PR #11340/a0d3bac8 backport][stable-12] cronvar: simplify exception raise - remove import sys (#11348)
cronvar: simplify exception raise - remove import sys (#11340)

* cronvar: simplify exception raise - remove import sys

* add changelog frag

(cherry picked from commit a0d3bac88c)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-12-30 16:20:46 +01:00
patchback[bot]
303bac630a [PR #11342/266d9d3f backport][stable-12] batch 2 - update Python idiom to 3.7 using pyupgrade (#11345)
batch 2 - update Python idiom to 3.7 using pyupgrade (#11342)

* batch 2 - update Python idiom to 3.7 using pyupgrade

* Apply suggestions from code review

(cherry picked from commit 266d9d3fb0)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-12-30 16:00:22 +01:00
Felix Fontein
213dc22217 The next expected release will be 12.3.0. 2025-12-29 15:27:28 +01:00
Felix Fontein
8919a545d3 Release 12.2.0. 2025-12-29 14:47:41 +01:00
patchback[bot]
59b6126320 [PR #11204/6ae47590 backport][stable-12] lxc_container: replace subprocess.Popen() with run_command() (#11339)
lxc_container: replace subprocess.Popen() with run_command() (#11204)

* lxc_container: replace subprocess.Popen() with run_command()

* Update plugins/modules/lxc_container.py



* add changelog frag

* retain Popen logic in module_utils

* Update plugins/module_utils/_lxc.py



---------


(cherry picked from commit 6ae47590cd)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-12-29 12:00:12 +01:00
patchback[bot]
3778ec8000 [PR #11328/18c362ee backport][stable-12] add devcontainer+pre-commit (#11338)
add devcontainer+pre-commit (#11328)

* add devcontainer support

* chore(devcontainer): install test requirements

* chore: add pre-commit

* fix format of pre-commit config file

* add licenses for the new files

* Apply suggestions from code review

* move requirements-dev.txt to inside .devcontainer

* specify files for ruff

* update CONTRIBUTING.md

* chore(devcontainer): use standard image, no docker build

* docs: format CONTRIBUTING.md (automatic by IDE)

* Update .devcontainer/devcontainer.json

* remove extraneous edits in CONTRIBUTING.md

(cherry picked from commit 18c362eef4)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-12-29 11:35:35 +01:00
patchback[bot]
440ee9c3fe [PR #11311/4fe129a0 backport][stable-12] Adding support for the Scaleway SCW_PROFILE environment variable. (#11336)
Adding support for the Scaleway SCW_PROFILE environment variable. (#11311)

* Adding support for the Scaleway SCW_PROFILE environment variable.

* Adding changelog fragment.

* Adding documentation for the environment variable.

* Adding SCW_PROFILE as a proper environment variable via the DOCUMENTATION block.

* Updating changelog fragment.

(cherry picked from commit 4fe129a0ed)

Co-authored-by: Greg Harvey <greg.harvey@gmail.com>
2025-12-28 21:25:02 +01:00
patchback[bot]
39a49e4d98 [PR #11314/b3c066b9 backport][stable-12] Adding scw_profile parameter to Scaleway module utilities. (#11337)
Adding scw_profile parameter to Scaleway module utilities. (#11314)

* Adding scw_profile parameter to Scaleway module utilities.

* Setting param name to profile for consistency and adding scw_profile as an alias.

* Adding changelog fragment.

* Forgot to import 'os' library.

* Typo in variable type for Scaleway profile.

* Also forgot to include the yaml library, code taken from plugins/inventory/scaleway.py.

* Adding default 'profile' value of empty string and changing check to a length check.

* Treated wrong variable, checking XDG_CONFIG_HOME is a string.

* Explicitly setting default of environment path vars to empty strings instead of None.

* Letting ruff reformat the dict for 'profile'.

* Changes from code review.

* Fixing ruff formatting issue with error message.

* Properly catching PyYAML import issues.

* Adding PyYAML requirement when 'profile' is used.

* Ruff wants an extra line after the PyYAML import code.

* Fixing PyYAML dependency code as per review.

* Removing extraneous var declaration.

* Moving SCW_CONFIG loading to a function.

* Fixing type errors with os.getenv calls.

* Cannot send None to os.path.exists() or open().

* Oops, inverted logic!

* Setting os.getenv() default to empty string so it is never None.



* None check no longer needed as scw_config_path is never None.
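The pattern behind those last two commits can be sketched as follows (the config location used here is an assumption for illustration, not necessarily the module's actual path):

```python
import os

def scw_config_path():
    # Defaulting os.getenv() to "" means these variables are never None, so
    # no separate None check is needed before os.path.join() or open().
    # The scw config location below is assumed for this sketch.
    xdg_config_home = os.getenv("XDG_CONFIG_HOME", "")
    home = os.getenv("HOME", "")
    if xdg_config_home:
        return os.path.join(xdg_config_home, "scw", "config.yaml")
    if home:
        return os.path.join(home, ".config", "scw", "config.yaml")
    return ""

print(scw_config_path())
```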



---------


(cherry picked from commit b3c066b99f)

Co-authored-by: Greg Harvey <greg.harvey@gmail.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-12-28 21:24:54 +01:00
patchback[bot]
51d8c4b5fd [PR #11332/280d269d backport][stable-12] fix: listen_ports_facts return no facts when using with podman (#11335)
fix: listen_ports_facts return no facts when using with podman (#11332)

* fix: listen_ports_facts return no facts when using with podman

* Update changelogs/fragments/listen-ports-facts-return-no-facts.yml



---------


(cherry picked from commit 280d269d78)

Co-authored-by: Daniel Gonçalves <dangoncalves@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-12-28 21:24:45 +01:00
patchback[bot]
f1fbdd4a6c [PR #11309/9f5114dc backport][stable-12] keycloak_userprofile: Add missing selector option (#11333)
keycloak_userprofile: Add missing selector option (#11309)

* Add selector option

* Add fragment

* Formatting

(cherry picked from commit 9f5114dc76)

Co-authored-by: maxblome <53860633+maxblome@users.noreply.github.com>
2025-12-28 10:00:10 +01:00
Felix Fontein
5f44c4ed50 Add reformat commit to .git-blame-ignore-revs. 2025-12-27 16:31:19 +01:00
patchback[bot]
e530d2906a [PR #11329/d549baa5 backport][stable-12] straight up: ruff format (#11330)
straight up: ruff format (#11329)

* straight up: ruff format

* Apply suggestions from code review

(cherry picked from commit d549baa5e1)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-12-27 16:29:02 +01:00
patchback[bot]
18c53b8bd1 [PR #11325/04d0a4da backport][stable-12] lxc_container: rearrange docs notes (#11327)
lxc_container: rearrange docs notes (#11325)

* lxc_container: rearrange docs notes

* Update plugins/modules/lxc_container.py

* reformat docs

(cherry picked from commit 04d0a4daf3)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-12-25 08:29:33 +01:00
patchback[bot]
bff9afc1f1 [PR #11323/ec6b7bf9 backport][stable-12] lxc_container: use tempfile.TemporaryDirectory (#11326)
lxc_container: use tempfile.TemporaryDirectory (#11323)

* lxc_container: use tempfile.TemporaryDirectory

* add changelog frag

* typo
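The gist of the change: `tempfile.TemporaryDirectory` is a context manager that removes the directory and everything in it on exit, replacing manual `mkdtemp()`/`rmtree()` bookkeeping. A minimal illustration (the file name is arbitrary):

```python
import os
import tempfile

with tempfile.TemporaryDirectory(prefix="lxc-archive-") as workdir:
    # Work inside the temporary directory...
    archive = os.path.join(workdir, "rootfs.tar")
    open(archive, "w").close()
    print(os.path.exists(archive))  # → True

# The directory and its contents are gone once the block exits.
print(os.path.exists(workdir))  # → False
```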

(cherry picked from commit ec6b7bf91c)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-12-25 08:29:23 +01:00
patchback[bot]
374ba70c37 [PR #11320/99b9680e backport][stable-12] Announce making all module utils, plugin utils, and doc fragments private (#11321)
Announce making all module utils, plugin utils, and doc fragments private (#11320)

Announce making all module utils, plugin utils, and doc fragments private.

(cherry picked from commit 99b9680ea2)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-12-24 08:44:36 +01:00
Felix Fontein
9da26ea3eb Prepare 12.2.0. 2025-12-23 21:38:47 +01:00
patchback[bot]
c6689b6245 [PR #11316/3debc968 backport][stable-12] Fixing documentation for scaleway_private_network module. (#11319)
Fixing documentation for scaleway_private_network module. (#11316)

(cherry picked from commit 3debc968a4)

Co-authored-by: Greg Harvey <greg.harvey@gmail.com>
2025-12-23 14:20:00 +01:00
patchback[bot]
8860faaa1c [PR #11120/61b559c4 backport][stable-12] add sssd_info module (#11306)
add sssd_info module (#11120)

* add sssd_info module

* fix f-strings and remove python2 support

* fix imports custom lib

* fix whitespace and add missing_required_lib

* fix str and add version

* try add mock test

* fix module and mock tests check

* fix required in main module

* fix spaces

* fix linters

* add final newline

* fix version of module

* fix description and error handling

* swap literal to dict

* fix str

* remove comment in methods

* remove _get in methods

* fix name method in test

* add botmeta

* fix description of server_type

* fix name of maintainer

* remove choices

* fix author

* fix type hint

* fix result

* fix spaces

* fix choices and empty returns

* fix mypy test result

* fix result

* run andebox yaml-doc

* remake simple try/exc for result

* fix tests

* add any type for testing mypy

* ruff formated

* fix docs

* remove unittest.main

* rename git account to official name

---------


(cherry picked from commit 61b559c4fd)

Co-authored-by: Aleksandr Gabidullin <101321307+a-gabidullin@users.noreply.github.com>
Co-authored-by: Александр Габидуллин <agabidullin@astralinux.ru>
2025-12-22 16:02:13 +01:00
patchback[bot]
824cb0e0a4 [PR #11304/02b18593 backport][stable-12] Remove unittest.main() calls (#11305)
Remove unittest.main() calls (#11304)

Remove unittest.main() calls.

(cherry picked from commit 02b185932c)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-12-22 16:01:22 +01:00
patchback[bot]
78625d1cd2 [PR #11170/2c6746ff backport][stable-12] ip2location_info: New Module - ip2location.io for IP geolocation lookup (#11303)
ip2location_info: New Module - ip2location.io for IP geolocation lookup (#11170)

* Added ip2location.io for IP geolocation lookup.

* Removed tab in last line.

* Added "ip2location" as maintainer.

* Update plugins/modules/ip2locationio_facts.py



* Update plugins/modules/ip2locationio_facts.py



* Update plugins/modules/ip2locationio_facts.py



* Update plugins/modules/ip2locationio_facts.py



* Update plugins/modules/ip2locationio_facts.py



* Update plugins/modules/ip2locationio_facts.py



* Update plugins/modules/ip2locationio_facts.py



* Added "typing" library.

* Updated import position.

* Reformatted.

* Added unit test.

* Updated documentation to add "ip" parameter.

* Renamed module from "ip2location_facts" to "ip2location_info".

* Updated version number.



* Update plugins/modules/ip2location_info.py



* Update plugins/modules/ip2location_info.py



* Updated return definition.

* Update BOTMETA.yml to latest module name.

* Update plugins/modules/ip2location_info.py



* Update plugins/modules/ip2location_info.py



* Removed extra parameter from "fetch_url".

* Fixed "test_ip2location_info.py" with formatter.

---------


(cherry picked from commit 2c6746ffa0)

Co-authored-by: IP2Location <support@ip2location.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-12-22 11:14:16 +01:00
patchback[bot]
a15ec28169 [PR #11285/a55884c9 backport][stable-12] Add support for missing validations in keycloak_userprofile (#11302)
Add support for missing validations in keycloak_userprofile (#11285)

* add missing validations-parameters as config options and add documentation for them; fixes https://github.com/ansible-collections/community.general/issues/9048

* fix parameter names

* extend unit tests

* support for camel casing for new validations and add changelog fragment

* Fix fragment format

* add 'version_added' documentation

* Update changelogs/fragments/11285-extended-keycloak-user-profile-validations.yml

mention fixed issue in fragment



* fix ruff formatting

---------


(cherry picked from commit a55884c921)

Co-authored-by: nwintering <33374766+nwintering@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-12-22 11:13:57 +01:00
patchback[bot]
12637fbd23 [PR #11295/a5aec7d6 backport][stable-12] Fix typo in auth_username in examples (#11300)
Fix typo in auth_username in examples (#11295)

(cherry picked from commit a5aec7d61a)

Co-authored-by: Ivan Kokalovic <67540157+koke1997@users.noreply.github.com>
2025-12-19 21:10:08 +01:00
patchback[bot]
21aa086ca6 [PR #11283/ef632145 backport][stable-12] Add more module_utils typing (#11289)
Add more module_utils typing (#11283)

Add more module_utils typing.

(cherry picked from commit ef632145e9)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-12-17 21:24:55 +01:00
patchback[bot]
8590184232 [PR #11291/4632e3d5 backport][stable-12] aix_*: docs adjustments (#11292)
aix_*: docs adjustments (#11291)

(cherry picked from commit 4632e3d5ee)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-12-16 07:02:24 +01:00
patchback[bot]
19257bac40 [PR #11284/df349459 backport][stable-12] keycloak_authentication_required_actions: fix examples (#11288)
keycloak_authentication_required_actions: fix examples (#11284)

The correct parameter name is "required_actions" (plural).

(cherry picked from commit df34945991)

Co-authored-by: Samuli Seppänen <samuli.seppanen@puppeteers.net>
2025-12-15 19:25:06 +01:00
patchback[bot]
79120c0f96 [PR #11277/1b15e595 backport][stable-12] use FQCN for extending docs with files and url (#11281)
use FQCN for extending docs with files and url (#11277)

* use FQCN for extending docs with files and url

* remove typo

(cherry picked from commit 1b15e595e0)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-12-14 12:31:44 +01:00
patchback[bot]
efe3462856 [PR #11276/a96a5c44 backport][stable-12] sysrc tests: skip FreeBSD 14.2 for ezjail tests (#11280)
sysrc tests: skip FreeBSD 14.2 for ezjail tests (#11276)

Looks like 14.2 no longer works.

(cherry picked from commit a96a5c44a5)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-12-14 12:06:03 +01:00
patchback[bot]
0cff5dec9f [PR #11265/d4249071 backport][stable-12] apk: fix packages return value for apk-tools >= 3 (fix #11264) (#11272)
apk: fix packages return value for apk-tools >= 3 (fix #11264) (#11265)

* apk: fix packages return value for apk-tools >= 3 (fix #11264)

* Add changelog fragment

(cherry picked from commit d424907172)

Co-authored-by: s-hamann <10639154+s-hamann@users.noreply.github.com>
2025-12-10 13:40:05 +01:00
patchback[bot]
0280b1ca5d [PR #11255/ac37544c backport][stable-12] monit: investigating tests again - using copilot on this one (#11271)
monit: investigating tests again - using copilot on this one (#11255)

* add monit version to successful exit

* install the standard monit - if 5.34, then bail out

* add 3sec wait after service restart

- that restart happens exactly before the task receiving the SIGTERM, so maybe, just maybe, it just needs time to get ready for the party

* wait for monit initialisation after restart

* monit tests: check service-specific status in readiness wait

The wait task was checking 'monit status' (general), but the actual
failing command is 'monit status -B httpd_echo' (service-specific).
This causes a race where general status succeeds but service queries
fail. Update to check the exact command format that will be used.

* monit tests: remove 5.34.x version restriction

The version restriction was based on incorrect diagnosis. The actual
issue was the readiness check validating general status instead of
service-specific queries. Now that we check the correct command
format, the tests should work across all monit versions.

* monit tests: add stabilization delay after readiness check

After the readiness check succeeds, add a 1-second pause before
running actual tests. Monit 5.34.x and 5.35 appear to have a
concurrency issue where rapid successive 'monit status -B' calls
can cause hangs even though the first call succeeds.

* monit tests: add retry logic for state changes to handle monit daemon hangs

Monit daemon has an intermittent concurrency bug across versions 5.27-5.35
where 'monit status -B' commands can hang (receiving SIGTERM) even after
the daemon has successfully responded to previous queries. This appears
to be a monit daemon issue, not a timing problem.

Add retry logic with 2-second delays to the state change task to work
around these intermittent hangs. Skip retries if the failure is not
SIGTERM (rc=-15) to avoid masking real errors.

* monit tests: capture and display monit.log for debugging

Add tasks in the always block to capture and display the monit log file.
This will help diagnose the intermittent hanging issues by showing what
monit daemon was doing when 'monit status -B' commands hang.

* monit tests: enable verbose logging (-v flag)

Modify the monit systemd service to start with -v flag for verbose
logging. This should provide more detailed information in the monit
log about what's happening when status commands hang.

* monit: add 0.5s delay after state change command

After extensive testing and analysis with verbose logging enabled, identified
that monit's HTTP interface can become temporarily unresponsive immediately
after processing state change commands (stop, start, restart, etc.).

This manifests as intermittent SIGTERM (rc=-15) failures when the module
calls 'monit status -B <service>' to verify the state change. The issue
affects all monit versions tested (5.27-5.35) and is intermittent, suggesting
a race condition or brief lock in monit's HTTP request handling.

Verbose logging confirmed:
- State change commands complete successfully
- HTTP server reports as 'started'
- But subsequent status checks can hang without any log entry

Adding a 0.5 second sleep after sending state change commands gives the
monit daemon time to fully process the command and become responsive again
before the first status verification check.

This complements the existing readiness check after daemon restart and
the retry logic for SIGTERM failures in the tests.

* tests(monit): remove workarounds after module race condition fix

After 10+ successful CI runs with no SIGTERM failures, removing test-level
workarounds that are now redundant due to the 0.5s delay fix in the module:

- Remove 1-second stabilization pause after daemon restart
  The module's built-in 0.5s delay after state changes makes this unnecessary

- Remove retry logic for SIGTERM failures in state change tests
  The race condition is now prevented at the module level

- Remove verbose logging setup and log capture
  Verbose mode didn't log HTTP requests, so it didn't help diagnose the issue
  and adds unnecessary overhead

Kept the readiness check with retries after daemon restart - still needed
to validate daemon is responsive after service restart (different scenario
than the state change race condition).

* restore tasks/main.yml

* monit tests: reduce readiness check retries from 60 to 10

After successful CI runs, observed that monit daemon becomes responsive
within 1-2 seconds after restart. The readiness check typically passes
on the first attempt.

Reducing from 60 retries (30s timeout) to 10 retries (5s timeout) is
more appropriate and allows tests to fail faster if something is
genuinely broken.

* add changelog frag

* Update changelogs/fragments/11255-monit-integrationtests.yml



---------


(cherry picked from commit ac37544c53)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-12-10 13:39:54 +01:00
patchback[bot]
ab95360b97 [PR #11260/a977c6f7 backport][stable-12] fix(sanitize_cr): avoid crash when realmrep is empty (#11268)
fix(sanitize_cr): avoid crash when realmrep is empty (#11260)

* fix(docs): missing info on id when creating realms

* fix(sanitize_cr): avoid crash when realmrep is empty

* remove unrelated change

* remove unrelated change

* added changelog

* correct: changelogs

* Update changelogs



---------



(cherry picked from commit a977c6f7c1)

Co-authored-by: Guillaume Dorschner <44686652+GuillaumeDorschner@users.noreply.github.com>
Co-authored-by: Guillaume Dorschner <guillaume.dorschner@thalesgroup.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-12-08 23:06:16 +01:00
patchback[bot]
ae7656b8da [PR #11256/a9540f93 backport][stable-12] keycloak_user_rolemapping: fix: failing to assign role to user (#11263)
keycloak_user_rolemapping: fix: failing to assign role to user (#11256)

* docs: clarify keycloak documentation example section with uid

* fix: allow assign role to user

* Add changelog frag

* Update changelogs/fragments/11256-fix-keycloak-roles-mapping.yml



---------



(cherry picked from commit a9540f93d2)

Co-authored-by: Guillaume Dorschner <44686652+GuillaumeDorschner@users.noreply.github.com>
Co-authored-by: Guillaume Dorschner <guillaume.dorschner@thalesgroup.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-12-06 13:53:04 +01:00
patchback[bot]
216e9e28c3 [PR #11258/0ef3eac0 backport][stable-12] iptables_state: get rid of temporary files (#11262)
iptables_state: get rid of temporary files (#11258)

Get rid of temporary files.

(cherry picked from commit 0ef3eac0f4)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-12-06 13:52:41 +01:00
patchback[bot]
7309650d26 [PR #11245/3d25aac9 backport][stable-12] monit: use enum (#11252)
monit: use enum (#11245)

* monit: use enum

* make mypy happy about the var type

* add changelog frag

* typo - this is getting frequent

(cherry picked from commit 3d25aac978)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-12-02 22:18:19 +01:00
patchback[bot]
ee2963d1ee [PR #11182/76589bd9 backport][stable-12] nmcli: allow VxLan multicast and bridge port (#11251)
nmcli: allow VxLan multicast and bridge port (#11182)

VxLan virtual devices can be added to bridge ports, like any other
devices. When using multicast remote addresses, NetworkManager also
needs to know the parent device.


(cherry picked from commit 76589bd97a)

Co-authored-by: Tiziano Müller <tm@dev-zero.ch>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-12-02 21:41:33 +01:00
patchback[bot]
469ba26c8f [PR #11240/8d51c5f6 backport][stable-12] btrfs module utils: pass command as list to run_command() (#11248)
btrfs module utils: pass command as list to `run_command()` (#11240)

* btrfs module utils: pass command as list to run_command()

* add changelog frag

(cherry picked from commit 8d51c5f666)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-12-02 21:17:59 +01:00
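The btrfs commit above passes the command as a list rather than a shell string. The difference can be sketched with stdlib `subprocess` (`AnsibleModule.run_command` makes the same distinction; the sample command here is illustrative):

```python
import subprocess
import sys

# With a list, each element reaches the program as exactly one argv
# entry: no shell is involved, so spaces and quoting cannot split or
# mangle arguments.
arg_with_space = "label with spaces"
out = subprocess.run(
    [sys.executable, "-c", "import sys; print(sys.argv[1])", arg_with_space],
    capture_output=True, text=True, check=True,
).stdout.strip()
assert out == arg_with_space
```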
patchback[bot]
5cb24c2599 [PR #11242/0a802ecd backport][stable-12] deps module util: use Enum to represent states (#11247)
deps module util: use Enum to represent states (#11242)

* deps module util: use Enum to represent states

* add changelog frag

(cherry picked from commit 0a802ecdcb)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-12-02 21:17:43 +01:00
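Both Enum commits above replace bare strings with an `Enum` to represent states. A minimal sketch of the pattern, with illustrative names (not the deps module util's actual states):

```python
from enum import Enum


class DepState(Enum):
    """States of a dependency check, as enum members instead of strings.

    Typos like 'vallidated' become AttributeErrors at the use site
    rather than silently-wrong string comparisons.
    """
    PENDING = "pending"
    VALIDATED = "validated"
    FAILED = "failed"


def transition(state: DepState, ok: bool) -> DepState:
    # Only a pending check can move; terminal states stay put.
    if state is DepState.PENDING:
        return DepState.VALIDATED if ok else DepState.FAILED
    return state


assert transition(DepState.PENDING, True) is DepState.VALIDATED
assert transition(DepState.FAILED, True) is DepState.FAILED
```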
Felix Fontein
4621fce535 The next expected release will be 12.2.0. 2025-12-01 22:00:10 +01:00
Felix Fontein
f16dbd6b56 Release 12.1.0. 2025-12-01 21:22:38 +01:00
patchback[bot]
377a599372 [PR #11222/c7f6a28d backport][stable-12] Add basic typing for module_utils (#11243)
Add basic typing for module_utils (#11222)

* Add basic typing for module_utils.

* Apply some suggestions.



* Make pass again.

* Add more types as suggested.

* Normalize extra imports.

* Add more type hints.

* Improve typing.

* Add changelog fragment.

* Reduce changelog.

* Apply suggestions from code review.



* Fix typo.

* Cleanup.

* Improve types and make type checking happy.

* Let's see whether older Pythons barf on this.

* Revert "Let's see whether older Pythons barf on this."

This reverts commit 9973af3dbe.

* Add noqa.

---------


(cherry picked from commit c7f6a28d89)

Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-12-01 21:16:37 +01:00
patchback[bot]
a2c7f9f89a [PR #11235/fb2f34ba backport][stable-12] Stop re-defining the argument spec in unit tests (#11239)
Stop re-defining the argument spec in unit tests (#11235)

* Stop re-defining the argument spec in unit tests.

* Shut up linter.

(cherry picked from commit fb2f34ba85)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-12-01 07:21:24 +01:00
patchback[bot]
3033dfa27c [PR #11231/16d51a82 backport][stable-12] remove % templating (#11237)
remove % templating (#11231)

* remove % templating

* add changelog frag

* suggestions from review

* remove unused import

(cherry picked from commit 16d51a8233)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-12-01 07:21:07 +01:00
patchback[bot]
721d2bd35d [PR #11198/6365b5a9 backport][stable-12] lxd_storage_pool_info, lxd_storage_volume_info: new modules (#11238)
lxd_storage_pool_info, lxd_storage_volume_info: new modules  (#11198)

* Fix mistaken rebase

* plugins/modules/lxd_storage_: include error codes, clean up notes

* plugins/modules/lxd_storage_: snap_url, ruff fix

* plugins/modules/lxd_storage_volume_info.py: remove checks on expected api returned bits

* plugins/modules/lxd_storage_volume_info.py: required: true

* tests/integration/targets/lxd_storage_volume_info/tasks/main.yaml: add Test fetching specific volume by name

* tests/unit/plugins/modules/test_lxd_storage_: add unit tests

* tests/integration/targets/lxd_storage_pool_info/tasks/main.yaml: add integration tests

* tests/integration/targets/lxd_storage_: not required

* tests/integration/targets/lxd_storage_: not required perhaps, lxd_project has them

* tests/unit/plugins/modules/test_lxd_storage_volume_info.py: fix python3.8 tests

* tests/unit/plugins/modules/test_lxd_storage_pool_info.py: fix python3.8

* tests/integration/targets/lxd_storage_: correct paths for aliases

* tests/unit/plugins/modules/test_lxd_storage_volume_info.py: remove backticks

* tests/unit/plugins/modules/test_lxd_storage_volume_info.py: remove blank line

* tests/unit/plugins/modules/test_lxd_storage_: python3.8 changes

* tests/unit/plugins/modules/test_lxd_storage_: python3.8 changes

* tests/unit/plugins/lookup/test_github_app_access_token.py: restore

* tests/unit/plugins/connection/test_wsl.py: restore

* plugins/modules/lxd_storage_: use ANSIBLE_LXD_DEFAULT_SNAP_URL and put API version into const

* lxd_storage_volume_info: use recursion to gather all volume details

* tests/integration/targets/lxd_storage_volume_info/tasks/main.yaml: fix silently skipped failures

* tests/integration/targets/lxd_storage_pool_info/tasks/main.yaml: fix silent failures

* lxd_storage_pool_info: update to use recursion to gather all details in one shot

* Remove unnecessary change.

---------


(cherry picked from commit 6365b5a981)

Co-authored-by: Sean McAvoy <seanmcavoy@gmail.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-12-01 07:20:55 +01:00
Felix Fontein
42e2b5147f [stable-12] Remove no longer needed _mount module util (#11232) (#11236)
Remove no longer needed _mount module util (#11232)

Remove no longer needed _mount module util.

(cherry picked from commit d30428ac71)
2025-12-01 07:20:47 +01:00
patchback[bot]
3d42ad4c6c [PR #11172/ebcad7e6 backport][stable-12] zfs: mark change correctly when updating properties whose current value differs, even if they already have a non-default value (Fixes #11019) (#11234)
zfs: mark change correctly when updating properties whose current value differs, even if they already have a non-default value (Fixes #11019) (#11172)

* zfs - mark change correctly when updating properties whose current value differs, even if they already have a non-default value (https://github.com/ansible-collections/community.general/issues/11019).



* changelog: rename fragment to match PR number



* Update changelogs/fragments/11172-zfs-changed-extra-props.yml



---------



(cherry picked from commit ebcad7e6d1)

Signed-off-by: handisyde <github@handisyde.com>
Co-authored-by: Paul Mercier-Handisyde <33284285+handisyde@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-30 15:03:32 +01:00
patchback[bot]
fb00ba1b0a [PR #11229/f2783967 backport][stable-12] fix couple of f-string mishaps (#11230)
fix couple of f-string mishaps (#11229)

* fix couple of f-string mishaps

* add changelog frag

* fix insanity

(cherry picked from commit f27839673c)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-30 08:38:34 +01:00
Felix Fontein
cdfc73b059 Prepare 12.1.0. 2025-11-30 08:37:19 +01:00
patchback[bot]
5b06814575 [PR #11199/22a4f8e2 backport][stable-12] Added support for Windows VM with Incus connection. (#11227)
Added support for Windows VM with Incus connection. (#11199)

* Added support for Windows VM with Incus connection.

* Update changelogs/fragments/11199-incus-windows.yml



* Attempt to fix the argument splitting.

* Only split on the first occurrence of the command argument

* Applying nox

---------


(cherry picked from commit 22a4f8e272)

Co-authored-by: Marc Olivier Bergeron <mbergeron28@proton.me>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-29 15:31:03 +01:00
patchback[bot]
69fc892002 [PR #11197/379db770 backport][stable-12] keycloak_realm: add webAuthnPolicyPasswordlessPasskeysEnabled param (#11228)
keycloak_realm: add webAuthnPolicyPasswordlessPasskeysEnabled param (#11197)

* keycloak_realm: add webAuthnPolicyPasswordlessPasskeysEnabled param

* Changelog Fragment - 11197

* Apply suggestions from code review



* Fix typo in changelog fragment filename

---------


(cherry picked from commit 379db770c5)

Co-authored-by: Christer Warén <cwchristerw@gmail.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-29 15:30:51 +01:00
patchback[bot]
8ae47d3a8d [PR #11223/d550baac backport][stable-12] fix ruff case UP031 (#11226)
fix ruff case UP031 (#11223)

* fix ruff case UP031

* refactor backslash out of f-string for the sake of old Pythons

* add changelog frag

* Update plugins/modules/imc_rest.py



* scaleway_user_data: fix bug and make it an f-string

* reformat

---------


(cherry picked from commit d550baacfa)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-29 14:16:53 +01:00
patchback[bot]
17d2a089a0 [PR #11224/1ab9be15 backport][stable-12] pushbullet: deprecation (#11225)
pushbullet: deprecation (#11224)

* pushbullet: deprecation

* add changelog frag

(cherry picked from commit 1ab9be152f)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-27 22:08:36 +01:00
patchback[bot]
e4261abab0 [PR #11174/86d6ef8d backport][stable-12] Allow None value maximum_timeout for gitlab_runner (#11218)
Allow None value maximum_timeout for gitlab_runner (#11174)

* change maximum_timeout type to raw

* allow None value for maximum_timeout in update_runner

* add changelog

* update changelog fragment formatting

* convert maximum_timeout value of 0 to None

* fix sanity check errors

* add suggested doc changes

* Note version required for timeout disable

---------


(cherry picked from commit 86d6ef8d0e)

Co-authored-by: colin93 <33459498+colin93@users.noreply.github.com>
Co-authored-by: colin <cosulli3@jaguarlandrover.com>
2025-11-25 22:11:28 +01:00
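The gitlab_runner commits above accept `maximum_timeout` as a raw value and normalize `0` to `None` so the timeout can be disabled. A hedged sketch of that normalization (the helper name is hypothetical, not the module's actual function):

```python
def normalize_maximum_timeout(value):
    """Treat 0 (or its string form, or None) as 'timeout disabled'.

    GitLab expects a missing value rather than 0, so the module maps
    0 -> None before sending the update. Anything else must be an int.
    """
    if value in (0, "0", None):
        return None
    return int(value)


assert normalize_maximum_timeout(0) is None
assert normalize_maximum_timeout("3600") == 3600
```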
patchback[bot]
cf94d4b01e [PR #11216/6b4100d7 backport][stable-12] CONTRIBUTING.md: fixes/improvements (#11221)
CONTRIBUTING.md: fixes/improvements (#11216)

* CONTRIBUTING.md: fixes/improvements

* Update CONTRIBUTING.md



---------


(cherry picked from commit 6b4100d70f)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-25 22:10:45 +01:00
patchback[bot]
8930d03c7c [PR #11215/862fe79a backport][stable-12] fix ruff case SIM110 (#11217)
fix ruff case SIM110 (#11215)

* fix ruff case SIM110

* Update plugins/module_utils/xenserver.py



* add changelog frag

---------


(cherry picked from commit 862fe79a22)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-25 21:59:06 +01:00
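ruff's SIM110, fixed in the commit above, flags a for-loop that returns True on the first match and False otherwise: it collapses into `any()`. A sketch with illustrative data (the real change lives in `plugins/module_utils/xenserver.py`):

```python
def has_running_vm_loop(vms):
    # The pre-SIM110 shape: explicit loop with early return.
    for vm in vms:
        if vm["power_state"] == "Running":
            return True
    return False


def has_running_vm(vms):
    # The SIM110 fix: same semantics, one generator expression.
    return any(vm["power_state"] == "Running" for vm in vms)


vms_up = [{"power_state": "Halted"}, {"power_state": "Running"}]
vms_down = [{"power_state": "Halted"}]
assert has_running_vm_loop(vms_up) and has_running_vm(vms_up)
assert not has_running_vm_loop(vms_down) and not has_running_vm(vms_down)
```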
patchback[bot]
e741e22ec4 [PR #11206/cbf13ab6 backport][stable-12] Fix crash in module_utils.datetime.fromtimestamp() (#11212)
Fix crash in module_utils.datetime.fromtimestamp() (#11206)

Fix crash in module_utils.datetime.fromtimestamp().

(cherry picked from commit cbf13ab6c9)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-25 21:42:22 +01:00
patchback[bot]
1e9217197f [PR #11205/d364e354 backport][stable-12] Deprecate unused module utils (#11213)
Deprecate unused module utils (#11205)

Deprecate unused module utils.

(cherry picked from commit d364e35423)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-25 21:41:56 +01:00
patchback[bot]
8da2ff61d5 [PR #11179/ebb53416 backport][stable-12] mas: Fix parsing on mas 3.0.0+. (#11210)
mas: Fix parsing on mas 3.0.0+. (#11179)

* mas: Fix parsing on mas 3.0.0+.

`mas` changed the formatting of `mas list` with version 3, which breaks
the parsing this module uses to determine which apps are installed.  In
particular, app IDs may now have leading space, which causes us to split
the string too early.

* Changelog fragment.

* Better format examples and changelog fragment.

(cherry picked from commit ebb534166e)

Co-authored-by: Michael Galati <11300961+leetoburrito@users.noreply.github.com>
2025-11-25 06:43:06 +01:00
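The mas commit above notes that `mas list` in mas 3 may left-pad app IDs, so splitting the raw line on the first space breaks too early. Stripping before splitting keeps parsing stable across versions; a hedged sketch (the parser name and sample lines are illustrative, not the module's actual code or real mas output):

```python
def parse_mas_list(stdout):
    """Map app ID -> remainder of line, tolerating leading whitespace."""
    apps = {}
    for line in stdout.splitlines():
        line = line.strip()  # mas 3 may left-pad the app ID
        if not line:
            continue
        app_id, _, rest = line.partition(" ")
        apps[app_id] = rest.strip()
    return apps


sample = "  497799835 Xcode (15.0)\n1147396723 WhatsApp (2.23)\n"
parsed = parse_mas_list(sample)
assert "497799835" in parsed
assert "1147396723" in parsed
```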
patchback[bot]
9c57bb4f60 [PR #11192/64dc009e backport][stable-12] solaris_zone: replace os.system() with run_command() (#11207)
solaris_zone: replace os.system() with run_command() (#11192)

* solaris_zone: replace os.system() with run_command()

* add changelog frag

(cherry picked from commit 64dc009ea7)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-24 21:21:29 +01:00
patchback[bot]
f32bcd34ef [PR #11193/f2731e1d backport][stable-12] onepassword_info: replace subprocess.Popen() with run_command() (#11208)
onepassword_info: replace subprocess.Popen() with run_command() (#11193)

* onepassword_info: replace subprocess.Popen() with run_command()

* add changelog frag

* Update plugins/modules/onepassword_info.py



---------


(cherry picked from commit f2731e1dac)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-24 21:21:21 +01:00
patchback[bot]
cc7ba7938a [PR #11200/a8031562 backport][stable-12] Bump actions/checkout from 5 to 6 in the ci group (#11201)
Bump actions/checkout from 5 to 6 in the ci group (#11200)

Bumps the ci group with 1 update: [actions/checkout](https://github.com/actions/checkout).

Updates `actions/checkout` from 5 to 6
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v5...v6)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '6'
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: ci
...



(cherry picked from commit a803156277)

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-11-24 18:15:35 +01:00
patchback[bot]
c0684e8a72 [PR #11148/7321ba49 backport][stable-12] snmp_facts: improvements (#11196)
snmp_facts: improvements (#11148)

* snmp_facts: improvements

* require level if vesion=v3

(cherry picked from commit 7321ba4990)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-23 13:28:23 +01:00
patchback[bot]
f648dca84a [PR #11189/1c678f5c backport][stable-12] fix ruff case UP030 (#11195)
fix ruff case UP030 (#11189)

* fix ruff case UP030

* add changelog frag

* formatting

* suggestion from review

(cherry picked from commit 1c678f5c07)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-23 08:58:57 +01:00
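ruff's UP030, fixed in the commit above, flags explicitly numbered positional format fields like `{0}` where implicit `{}` (or, in this collection, an f-string) says the same thing. A small illustration with made-up values:

```python
name, rc = "monit", 0

old = "{0} exited with {1}".format(name, rc)   # UP030: explicit numbering
new = "{} exited with {}".format(name, rc)     # implicit positions
best = f"{name} exited with {rc}"              # the f-string form used here

assert old == new == best == "monit exited with 0"
```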
patchback[bot]
084ecd96e1 [PR #11190/9a3e26ad backport][stable-12] fix ruff case SIM112 (#11194)
fix ruff case SIM112 (#11190)

(cherry picked from commit 9a3e26ad98)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-23 08:58:35 +01:00
patchback[bot]
9cdeb5a9b9 [PR #11167/19757b3a backport][stable-12] Add type hints to action and test plugins and to plugin utils; fix some bugs, and improve input validation (#11191)
Add type hints to action and test plugins and to plugin utils; fix some bugs, and improve input validation (#11167)

* Add type hints to action and test plugins and to plugin utils. Also fix some bugs and add proper input validation.

* Combine lines.



* Extend changelog fragment.

* Move task_vars initialization up.

---------


(cherry picked from commit 19757b3a4c)

Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-23 08:36:34 +01:00
patchback[bot]
dbcd0dc497 [PR #11185/4517b86e backport][stable-12] snmp_facts: update docs with dependency constraint (#11187)
snmp_facts: update docs with dependency constraint (#11185)

(cherry picked from commit 4517b86ed4)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-22 22:44:03 +01:00
patchback[bot]
0e73d6a593 [PR #11168/e57de70c backport][stable-12] Address UP014: use NamedTuple class syntax (#11183)
Address UP014: use NamedTuple class syntax (#11168)

* Address UP014: use NamedTuple class syntax.

* Convert type comments to type hints.

(cherry picked from commit e57de70c2a)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-21 18:42:35 +01:00
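ruff's UP014, addressed in the commit above, replaces the functional `NamedTuple("Name", [...])` call with the class syntax, which also carries type hints. A sketch with illustrative field names (not the collection's actual tuples):

```python
from typing import NamedTuple


class MountInfo(NamedTuple):
    """Class-syntax NamedTuple: typed fields, optional defaults."""
    device: str
    fstype: str
    readonly: bool = False


m = MountInfo("/dev/sda1", "ext4")
assert m.device == "/dev/sda1"
assert m.readonly is False
assert m._fields == ("device", "fstype", "readonly")
```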
patchback[bot]
c9df20808d [PR #11032/af99cc7d backport][stable-12] Add New Module file_remove (#11184)
Add New Module file_remove (#11032)

* Add New Module file_remove

* Add fixes from code review

* Change file_type documentation

* Remove python to_native from the module

* Remove redundant block/always cleanup

* Update plugins/modules/file_remove.py



* Update plugins/modules/file_remove.py



* Update plugins/modules/file_remove.py



* Update plugins/modules/file_remove.py



* Update plugins/modules/file_remove.py



* Update plugins/modules/file_remove.py



* Update plugins/modules/file_remove.py



* Update plugins/modules/file_remove.py



* Add more nox fixes to latest review

* Update plugins/modules/file_remove.py

LGTM



* Update tests/integration/targets/file_remove/tasks/main.yml

Right, that's better.



* Fix EXAMPLES regex pattern

* Add warning when a listed file was removed by another process during
playbook execution

* remove raise exception from find_matching_files;

* Update plugins/modules/file_remove.py



* Update plugins/modules/file_remove.py



---------


(cherry picked from commit af99cc7deb)

Co-authored-by: Shahar Golshani <sgolshan@redhat.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-21 18:33:37 +01:00
Felix Fontein
649b32759b [stable-12] docs: migrate RTD URLs to docs.ansible.com (#11109) (#11175)
docs: migrate RTD URLs to docs.ansible.com (#11109)

* docs: update readthedocs.io URLs to docs.ansible.com equivalents

🤖 Generated with Claude Code
https://claude.ai/code



* Adjust favicon URL.



---------




(cherry picked from commit d98df2d3a5)

Co-authored-by: John Barker <john@johnrbarker.com>
Co-authored-by: Claude <noreply@anthropic.com>
Co-authored-by: Don Naro <dnaro@redhat.com>
2025-11-19 18:22:29 +01:00
patchback[bot]
9d7855b844 [PR #11149/79b16d9c backport][stable-12] fix return value exception (#11173)
fix return value `exception` (#11149)

* fix return value `exception`

* add changelog frag

* adjustments after review

* typo

* adjust changelog frag

* vmadm: send rc, stdout, and stderr to fail_json()

* rundeck: pass tracebacks

* Update changelogs/fragments/11149-rv-exception.yml



* Update changelogs/fragments/11149-rv-exception.yml



---------


(cherry picked from commit 79b16d9ca5)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-18 17:57:16 +01:00
patchback[bot]
4480036401 [PR #11169/a986d81c backport][stable-12] dconf: doc typo (#11171)
dconf: doc typo (#11169)

(cherry picked from commit a986d81c3d)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-17 07:09:50 +01:00
patchback[bot]
5ae8d2ccec [PR #11107/e20e32bb backport][stable-12] Extend failure message for merge_variables type detection (#11166)
Extend failure message for merge_variables type detection (#11107)

merge_variables: extend type detection failure message

Update the error message for the merge_variables lookup plugin in case an unsupported type is passed.

(cherry picked from commit e20e32bb87)

Co-authored-by: Roy Lenferink <lenferinkroy@gmail.com>
2025-11-17 06:47:04 +01:00
patchback[bot]
1f9d6787fb [PR #11046/98aca27a backport][stable-12] locale_gen: search for available locales in /usr/local as well (#11163)
locale_gen: search for available locales in /usr/local as well (#11046)

* locale_gen: search for available locales in /usr/local as well

* better var name

* add test for /usr/local

* Apply suggestions from code review



* skip /usr/local/ for Archlinux

* improve/update documentation

* add license file for the custom locale

* add changelog frag

* Update plugins/modules/locale_gen.py



* Update changelogs/fragments/11046-locale-gen-usrlocal.yml



---------


(cherry picked from commit 98aca27a8b)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-17 06:46:42 +01:00
patchback[bot]
95175e056f [PR #11069/6c1676fc backport][stable-12] spotinst_aws_elastigroup: deprecation (#11103)
spotinst_aws_elastigroup: deprecation (#11069)

* spotinst_aws_elastigroup: deprecation

* add changelog frag

* add missing URL to chglog

* Update changelogs/fragments/11069-deprecate-spotinst.yml



* Update meta/runtime.yml



* Update plugins/modules/spotinst_aws_elastigroup.py



---------


(cherry picked from commit 6c1676fcbb)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-17 06:46:27 +01:00
patchback[bot]
44960de208 [PR #11087/6e1cc3ea backport][stable-12] swupd: deprecation (#11099)
swupd: deprecation (#11087)

* swupd: deprecation

* add changelog frag

* Update changelogs/fragments/11087-deprecate-swupd.yml



* Update meta/runtime.yml



* Update plugins/modules/swupd.py



---------


(cherry picked from commit 6e1cc3eafd)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-17 06:45:55 +01:00
patchback[bot]
7c46b7edbc [PR #11088/7f47deed backport][stable-12] dconf: deprecate fallback mechanism (#11094)
dconf: deprecate fallback mechanism (#11088)

* dconf: deprecate fallback mechanism

* add changelog frag

(cherry picked from commit 7f47deed64)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-17 06:45:44 +01:00
patchback[bot]
2fa8179df2 [PR #11070/37297f38 backport][stable-12] layman: deprecation (#11096)
layman: deprecation (#11070)

* layman: deprecation

* add changelog frag

* Update changelogs/fragments/11070-deprecate-layman.yml



---------


(cherry picked from commit 37297f38ae)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-17 06:45:30 +01:00
patchback[bot]
8a7f360558 [PR #11143/23e81b8d backport][stable-12] replace redundant to_native()/to_text() occurrences, batch 8 (#11164)
replace redundant to_native()/to_text() occurrences, batch 8 (#11143)

* replace redundant to_native()/to_text() occurrences, batch 8

* add changelog frag

* Update plugins/modules/jira.py



---------


(cherry picked from commit 23e81b8d30)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-16 07:09:40 +01:00
patchback[bot]
15d4fff749 [PR #11144/5617d57c backport][stable-12] xcc_redfish_command: fix messages showing dict keys (#11162)
xcc_redfish_command: fix messages showing dict keys (#11144)

* xcc_redfish_command: fix messages showing dict keys

* add changelog frag

* Update plugins/modules/xcc_redfish_command.py



* Update plugins/modules/xcc_redfish_command.py



* Apply suggestions from code review

* Update plugins/modules/xcc_redfish_command.py

---------


(cherry picked from commit 5617d57c8c)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-15 23:17:16 +01:00
patchback[bot]
6d582acb26 [PR #11159/6bf0780d backport][stable-12] xfconf: update state=absent doc (#11160)
xfconf: update state=absent doc (#11159)

(cherry picked from commit 6bf0780d23)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-15 21:11:12 +01:00
patchback[bot]
4044998ff5 [PR #11154/53c62e7a backport][stable-12] Fix snmp_facts return value docs (#11158)
Fix snmp_facts return value docs (#11154)

Fix snmp_facts return value docs.

(cherry picked from commit 53c62e7a43)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-14 18:47:11 +01:00
patchback[bot]
b35f138976 [PR #11155/f401c68d backport][stable-12] remove redundant line from ruff.toml (#11156)
remove redundant line from ruff.toml (#11155)

(cherry picked from commit f401c68df3)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-14 07:36:10 +01:00
patchback[bot]
8fd40ed9e4 [PR #11150/32f0ad2f backport][stable-12] Fixed typo in decompress example documentation (#11153)
Fixed typo in decompress example documentation (#11150)

(cherry picked from commit 32f0ad2f97)

Co-authored-by: Thomas Löhr <tlhr@users.noreply.github.com>
2025-11-13 23:11:50 +01:00
patchback[bot]
02de34c46b [PR #11147/183aa6ed backport][stable-12] fix markup (#11151)
fix markup (#11147)

* fix markup for common return values

* Apply suggestion from review

(cherry picked from commit 183aa6ed6b)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-13 19:42:08 +01:00
patchback[bot]
0813907a89 [PR #11145/255059f7 backport][stable-12] fix ruff case B015 (#11146)
fix ruff case B015 (#11145)

* fix ruff case B015

* add changelog frag

(cherry picked from commit 255059f7b3)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-13 06:30:21 +01:00
patchback[bot]
04fb53e8a3 [PR #11112/f5c2c8b9 backport][stable-12] replace redundant to_native()/to_text() occurrences, batch 7 (#11142)
replace redundant to_native()/to_text() occurrences, batch 7 (#11112)

* replace redundant to_native()/to_text() occurrences, batch 7

* add changelog frag

* made changes per review

(cherry picked from commit f5c2c8b9a2)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-13 06:09:17 +01:00
patchback[bot]
f1d9a2b134 [PR #11110/996d9a7f backport][stable-12] replace batch 6 of redundant to_native()/to_text() occurrences (#11141)
replace batch 6 of redundant to_native()/to_text() occurrences (#11110)

* replace batch 6 of redundant to_native()/to_text() occurrences

* add changelog frag

(cherry picked from commit 996d9a7f63)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-12 21:59:18 +01:00
patchback[bot]
619ea5b7b3 [PR #11106/f785e9c7 backport][stable-12] replace batch of redundant to_native()/to_text() occurrences (#11140)
replace batch of redundant to_native()/to_text() occurrences (#11106)

* replace batch of redundant to_native()/to_text() occurrences

* add changelog frag

* snap sanity

* rolling back snap for now

* more cases in redhat_subscription

(cherry picked from commit f785e9c780)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-12 21:59:09 +01:00
patchback[bot]
4b9ece4fbd [PR #11105/9b886739 backport][stable-12] replace batch of redundant to_native()/to_text() occurrences (#11139)
replace batch of redundant to_native()/to_text() occurrences (#11105)

* replace batch of redundant to_native()/to_text() occurrences

* add changelog frag

(cherry picked from commit 9b8867399e)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-12 21:59:03 +01:00
patchback[bot]
31f0087da9 [PR #11104/4171b8a9 backport][stable-12] replace batch of redundant to_native()/to_text() occurrences (#11138)
replace batch of redundant to_native()/to_text() occurrences (#11104)

* replace batch of redundant to_native()/to_text() occurrences

* add changelog frag

(cherry picked from commit 4171b8a9ab)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-12 21:58:55 +01:00
patchback[bot]
50ae8fd7ae [PR #11102/e5ee3eb8 backport][stable-12] replace batch of redundant to_native() occurrences (#11137)
replace batch of redundant to_native() occurrences (#11102)

* replace batch of redundant to_native() occurrences

* add changelog frag

* Update plugins/modules/idrac_redfish_config.py



* reformat

* Apply suggestions from code review



* Update plugins/modules/dimensiondata_network.py



---------


(cherry picked from commit e5ee3eb88b)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-12 21:58:44 +01:00
patchback[bot]
b72e38c909 [PR #11115/58bb1e7c backport][stable-12] fix ruff case B007 (#11131)
fix ruff case B007 (#11115)

* fix ruff case B007

* rollback inventory/iocage

* re-do the fix in inventory/iocage

* add cases in tests/unit/plugins

* rollback plugins/module_utils/memset.py

* rollback extraneous changes in plugins/modules/xcc_redfish_command.py

* add changelog frag

(cherry picked from commit 58bb1e7c04)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-12 21:58:36 +01:00
patchback[bot]
42997e2d28 [PR #11135/ec091060 backport][stable-12] ruff: remove ignore entry B904 (raise without from inside except) (#11136)
ruff: remove ignore entry B904 (raise without from inside except) (#11135)

Remove ignore entry.

(cherry picked from commit ec091060d7)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-12 21:38:43 +01:00
patchback[bot]
6df72406c5 [PR #11122/2dfb46a4 backport][stable-12] remove ignore lines for Python 2 (#11134)
remove ignore lines for Python 2 (#11122)

* remove ignore lines for Python 2

* use yield from

* add changelog frag

* Update changelogs/fragments/11122-yield-from-ignore.yml



---------


(cherry picked from commit 2dfb46a4a6)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-12 21:36:26 +01:00
patchback[bot]
93d23cfef6 [PR #11119/41923e43 backport][stable-12] fix ruff case SIM103 (#11132)
fix ruff case SIM103 (#11119)

* fix ruff case SIM103

* add changelog frag

(cherry picked from commit 41923e43bd)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-12 21:32:10 +01:00
patchback[bot]
ac6c6df2c7 [PR #11121/c45fba54 backport][stable-12] fix ruff case E721 (#11133)
fix ruff case E721 (#11121)

* fix ruff case E721

* add changelog frag

(cherry picked from commit c45fba549f)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-12 21:31:22 +01:00
patchback[bot]
1d28e48d85 [PR #11097/40aea793 backport][stable-12] Use raise from in modules (#11130)
Use raise from in modules (#11097)

* Use raise from.

* Add changelog fragment.

* Add comment.

(cherry picked from commit 40aea793ee)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-12 21:31:11 +01:00
patchback[bot]
cc93dab0fd [PR #11095/2b4333a0 backport][stable-12] Use raise from in plugins (#11129)
Use raise from in plugins (#11095)

* Use raise from.

* Add changelog fragment.

(cherry picked from commit 2b4333a033)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-12 21:00:39 +01:00
patchback[bot]
cddb570e0e [PR #11123/1a82e93c backport][stable-12] Re-enable Copr integration tests (#11126)
Re-enable Copr integration tests (#11123)

Fixes: https://github.com/ansible-collections/community.general/issues/10987

(cherry picked from commit 1a82e93c6d)

Co-authored-by: Maxwell G <maxwell@gtmx.me>
2025-11-12 19:19:56 +01:00
patchback[bot]
c573891160 [PR #11045/6f11d750 backport][stable-12] Use Cobbler API version format to check version (#11117)
Use Cobbler API version format to check version (#11045)

* Use Cobbler API version format to check version

Cobbler use the formula below to return the version:

float(format(int(elems[0]) + 0.1 * int(elems[1]) + 0.001 * int(elems[2]), '.3f'))

Which means that 3.3.7 is changed to 3.307, which a plain float comparison treats as less than 3.4.

* Compare Cobbler version as a float

* Remove LooseVersion import
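The encoding described above is easy to sketch; the snippet below mirrors the quoted Cobbler formula (the helper name is hypothetical, not taken from the module) and shows why comparing the result as a float differs from a semantic version comparison:

```python
# Sketch of Cobbler's version encoding as quoted above.
# The function name is hypothetical; only the formula is from the source.
def cobbler_version_to_float(version: str) -> float:
    elems = version.split(".")
    return float(format(int(elems[0]) + 0.1 * int(elems[1]) + 0.001 * int(elems[2]), ".3f"))

# 3.3.7 encodes to 3.307, so a float comparison against 3.4 orders it
# *below* 3.4 even though more dotted components follow.
print(cobbler_version_to_float("3.3.7"))  # 3.307
print(cobbler_version_to_float("3.4.0"))  # 3.4
```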

(cherry picked from commit 6f11d75047)

Co-authored-by: Bruno Travouillon <devel@travouillon.fr>
2025-11-12 06:54:04 +01:00
patchback[bot]
6481c4edfa [PR #11111/62492fe7 backport][stable-12] Add ignore.txt entries for bad-return-value-key (#11113)
Add ignore.txt entries for bad-return-value-key (#11111)

Add ignore.txt entries.

(cherry picked from commit 62492fe742)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-12 06:39:27 +01:00
patchback[bot]
768f16e9c4 [PR #11093/3b700f09 backport][stable-12] yum_versionlock: remove to_native() around command output (#11101)
yum_versionlock: remove to_native() around command output (#11093)

* yum_versionlock: remove redundant use of to_native() around command output

* reformat

* add changelog frag

(cherry picked from commit 3b700f0998)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-11 07:23:19 +01:00
patchback[bot]
d487734fea [PR #11098/634be713 backport][stable-12] replace batch of redundant to_native() occurrences (#11100)
replace batch of redundant to_native() occurrences (#11098)

* replace batch of redundant to_native() occurrences

* add changelog frag

(cherry picked from commit 634be713bb)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-11 07:23:07 +01:00
patchback[bot]
5be39ee0c3 [PR #11089/c26a4e61 backport][stable-12] consul_kv: adjust RV in docs (#11092)
consul_kv: adjust RV in docs (#11089)

(cherry picked from commit c26a4e613b)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-11 06:09:19 +01:00
patchback[bot]
9db4aad986 [PR #11078/dcb580c4 backport][stable-12] discard Python 2 ssl handling (#11086)
discard Python 2 ssl handling (#11078)

* discard Python 2 ssl handling

* add changelog frag

* Apply suggestion



---------


(cherry picked from commit dcb580c41d)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-10 21:50:40 +01:00
Felix Fontein
b593c673b1 The next release will be 12.0.2 or 12.1.0. 2025-11-10 21:50:32 +01:00
Felix Fontein
9c143467f8 Release 12.0.1. 2025-11-10 21:01:43 +01:00
patchback[bot]
d13cc08efa [PR #11081/e8bdf466 backport][stable-12] Migrate 1 RTD URLs to docs.ansible.com (#11082)
Migrate 1 RTD URLs to docs.ansible.com (#11081)

Migrate RTD URLs to docs.ansible.com

Updated 1 ansible.readthedocs.io URL to its docs.ansible.com equivalent
as part of the Read the Docs migration.

🤖 Generated with Claude Code
https://claude.ai/code


(cherry picked from commit e8bdf46627)

Co-authored-by: John Barker <john@johnrbarker.com>
Co-authored-by: Claude <noreply@anthropic.com>
2025-11-10 20:35:22 +01:00
patchback[bot]
1559a378b1 [PR #11076/8b1d725f backport][stable-12] irc: use True instead of 1 (#11084)
irc: use True instead of 1 (#11076)

* irc: use True instead of 1

* add changelog frag

(cherry picked from commit 8b1d725fb2)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-10 20:34:57 +01:00
patchback[bot]
1951f70c00 [PR #11072/b5157b68 backport][stable-12] opendj_backendprop: use check_rc (#11077)
opendj_backendprop: use check_rc (#11072)

* opendj_backendprop: use check_rc

* add changelog frag

(cherry picked from commit b5157b68ba)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-10 20:34:42 +01:00
patchback[bot]
a78d0d2263 [PR #11031/3cbe44e2 backport][stable-12] Update TSS lookup plugin documentation and add Delinea Platform authentication examples (#11074)
Update TSS lookup plugin documentation and add Delinea Platform authentication examples (#11031)

* - Update documentation from Thycotic to Delinea branding
- Add comprehensive Platform authentication examples
- Enhance existing examples with clearer task names
- Improve RETURN section documentation
- Fix AccessTokenAuthorizer initialization with base_url parameter
- Add support for both Secret Server and Platform authentication methods

* Fixed linting issue and added changelog fragment file.

* Removed documentation changes from changelog file.

(cherry picked from commit 3cbe44e269)

Co-authored-by: delinea-sagar <131447653+delinea-sagar@users.noreply.github.com>
2025-11-10 06:47:46 +01:00
patchback[bot]
692f5f603c [PR #11071/60828e82 backport][stable-12] smartos imgadm man page reference (#11075)
smartos imgadm man page reference (#11071)

(cherry picked from commit 60828e82a4)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-10 06:47:37 +01:00
Felix Fontein
ded373a0e7 Prepare 12.0.1. 2025-11-09 21:34:14 +01:00
patchback[bot]
aeded21682 [PR #11052/0175d75a backport][stable-12] dnsimple_info: minor improvements (#11068)
dnsimple_info: minor improvements (#11052)

* dnsimple_info: minor improvements

* add changelog frag

* typo

* Update plugins/modules/dnsimple_info.py



---------


(cherry picked from commit 0175d75a7c)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-09 17:19:40 +00:00
patchback[bot]
bb926e462f [PR #11066/5ea1dee3 backport][stable-12] oneview: remove superfluous parts from unit test (#11067)
oneview: remove superfluous parts from unit test (#11066)

Remove superfluous parts from unit test.

(cherry picked from commit 5ea1dee3ea)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-09 10:10:11 +01:00
patchback[bot]
8cd80d94a0 [PR #11049/396f467b backport][stable-12] Improve Python code: address unused variables (#11058)
Improve Python code: address unused variables (#11049)

* Address F841 (unused variable).

* Reformat.

* Add changelog fragment.

* More cleanup.

* Remove trailing whitespace.

* Readd removed code as a comment with TODO.

(cherry picked from commit 396f467bbb)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-09 09:59:42 +01:00
patchback[bot]
1eca76969a [PR #11055/a9a4f890 backport][stable-12] remove required=false from docs (#11065)
remove required=false from docs (#11055)

(cherry picked from commit a9a4f89033)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-09 09:59:07 +01:00
patchback[bot]
b732bd5b9e [PR #11054/49c7253f backport][stable-12] zfs_facts: use check_rc (#11059)
zfs_facts: use check_rc (#11054)

* zfs_facts: use check_rc

* add changelog frag

(cherry picked from commit 49c7253f24)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-09 09:58:24 +01:00
patchback[bot]
473e1f92e2 [PR #11053/ac4f657d backport][stable-12] opendj_backendprop: docs improvements (#11060)
opendj_backendprop: docs improvements (#11053)

(cherry picked from commit ac4f657d43)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-09 09:58:15 +01:00
patchback[bot]
e304709d8e [PR #11057/0d8521c7 backport][stable-12] supervisorctl: investigate integration tests (#11062)
supervisorctl: investigate integration tests (#11057)

* supervisorctl: investigate integration tests

* wait for supervisord to complete stop

* adjust in module

(cherry picked from commit 0d8521c718)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-09 09:58:07 +01:00
patchback[bot]
caebf65948 [PR #11048/ebf45260 backport][stable-12] remove conditional code for old snakes (#11050)
remove conditional code for old snakes (#11048)

* remove conditional code for old snakes

* remove conditional code for old snakes

* reformat

* add changelog frag

(cherry picked from commit ebf45260ce)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-08 20:55:09 +01:00
patchback[bot]
16f1d07509 [PR #11043/3478863e backport][stable-12] Address issues reported by ruff check (#11047)
Address issues reported by ruff check (#11043)

* Resolve E713 and E714 (not in/is tests).

* Address UP018 (unnecessary str call).

* UP045 requires Python 3.10+.

* Address UP007 (X | Y for type annotations).

* Address UP035 (import Callable from collections.abc).

* Address UP006 (t.Dict -> dict).

* Address UP009 (UTF-8 encoding comment).

* Address UP034 (extraneous parentheses).

* Address SIM910 (dict.get() with None default).

* Address F401 (unused import).

* Address UP020 (use builtin open).

* Address B009 and B010 (getattr/setattr with constant name).

* Address SIM300 (Yoda conditions).

* UP029 isn't in use anyway.

* Address FLY002 (static join).

* Address B034 (re.sub positional args).

* Address B020 (loop variable overrides input).

* Address B017 (assert raise Exception).

* Address SIM211 (if expression with false/true).

* Address SIM113 (enumerate for loop).

* Address UP036 (sys.version_info checks).

* Remove unnecessary UP039.

* Address SIM201 (not ==).

* Address SIM212 (if expr with twisted arms).

* Add changelog fragment.

* Reformat.

(cherry picked from commit 3478863ef0)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-08 09:49:52 +01:00
patchback[bot]
11b802372b [PR #11033/f5943201 backport][stable-12] filesystem: xfs resize: minimal required increment (#11041)
filesystem: xfs resize: minimal required increment (#11033)

Internally XFS uses allocation groups. Allocation groups have a maximum
size of 1 TiB - 1 block. For devices >= 4 TiB XFS uses max size
allocation groups. If a filesystem is extended and the last allocation
group is already at max size, a new allocation group is added. An
allocation group seems to require at least 64 4 KiB blocks.

For devices with integer TiB size (>4), this creates a filesystem that
initially has 1 unused block per TiB of size. The `resize` option
detects this unused space and tries to resize the filesystem. The
xfs_growfs call is successful (exit 0), but does not increase the file
system size. This is detected as a repeated change in the task.

Test case:
```
- hosts: localhost
  tasks:
    - ansible.builtin.command:
        cmd: truncate -s 4T /media/xfs.img
        creates: /media/xfs.img
      notify: loopdev xfs

    - ansible.builtin.meta: flush_handlers

    - name: pickup xfs.img resize
      ansible.builtin.command:
        cmd: losetup -c /dev/loop0
      changed_when: false

    - community.general.filesystem:
        dev: "/dev/loop0"
        fstype: "xfs"

    - ansible.posix.mount:
        src: "/dev/loop0"
        fstype: "xfs"
        path: "/media/xfs"
        state: "mounted"

    # always shows a diff even for newly created filesystems
    - community.general.filesystem:
        dev: "/dev/loop0"
        fstype: "xfs"
        resizefs: true

  handlers:
    - name: loopdev xfs
      ansible.builtin.command:
        cmd: losetup /dev/loop0 /media/xfs.img
```

NB: If the last allocation group is not yet at max size, the filesystem
can be resized. Detecting this requires considering the XFS topology.
Other filesystems (at least ext4) also seem to require a minimum
increment after the initial device size, but seem to use the entire
device after initial creation.

Fun observation: creating a 64(+) TiB filesystem leaves a 64(+) block
gap at the end, which is allocated in a subsequent xfs_growfs call.


(cherry picked from commit f5943201b9)

Co-authored-by: jnaab <25617714+jnaab@users.noreply.github.com>
Co-authored-by: Johannes Naab <johannes.naab@hetzner-cloud.de>
2025-11-07 21:45:06 +01:00
patchback[bot]
d5b657d872 [PR #11037/c984b896 backport][stable-12] docs style adjustments (#11038)
docs style adjustments (#11037)

docs adjustments

(cherry picked from commit c984b89667)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-07 06:48:36 +01:00
patchback[bot]
855a8504d5 [PR #11029/3c42ec73 backport][stable-12] remove extraneous whitespaces (#11035)
remove extraneous whitespaces (#11029)

* remove extraneous whitespaces

* ruff format

* add changelog frag

(cherry picked from commit 3c42ec730d)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-11-05 22:42:14 +01:00
patchback[bot]
c8ad571e27 [PR #11030/b471a4a9 backport][stable-12] Fix typing failure in CI (#11034)
Fix typing failure in CI (#11030)

* Fix typing failure in CI.

* Add changelog fragment.

(cherry picked from commit b471a4a90d)

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-05 22:26:39 +01:00
Felix Fontein
a22c77ab75 The next release will likely be 12.0.1. 2025-11-03 19:46:05 +01:00
Felix Fontein
6653e84e2e Release 12.0.0. 2025-11-03 18:49:15 +01:00
Felix Fontein
0f6570a5d2 Fix deprecation version. 2025-11-03 18:46:42 +01:00
Felix Fontein
175271e1f3 Remove already executed deprecations. 2025-11-03 18:46:30 +01:00
Felix Fontein
a8d870ba94 Update URLs to stable-12. 2025-11-03 18:42:04 +01:00
mirabilos
f5203aa135 kea_command: new module to access an ISC KEA server (#10709)
kea_command: new module to access an ISC KEA server

This module can be used to access the JSON API of a
KEA DHCP4, DHCP6, DDNS or other services in a generic
way, without having to manually format the JSON, with
response error code checking.

It directly accesses the Unix Domain Socket API so it
needs to execute on the system the server is running,
with superuser privileges, but without the hassle of
wrapping it into HTTPS and password auth (or client
certificates).
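
Independent of the module's implementation, talking to a KEA control socket boils down to writing one JSON command to the Unix domain socket and reading back the JSON reply. A minimal sketch under assumptions (the socket path, command name, and helper name are illustrative, not taken from kea_command):

```python
import json
import socket


def kea_command(sock_path, command, service=None):
    """Send one JSON command to a KEA control socket and return the parsed reply.

    Sketch only: the socket path and command names used with it are
    assumptions, not taken from the kea_command module. KEA replies carry
    a numeric "result" field, where 0 means success.
    """
    payload = {"command": command}
    if service:
        payload["service"] = service
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(sock_path)
        sock.sendall(json.dumps(payload).encode("utf-8"))
        sock.shutdown(socket.SHUT_WR)  # signal end of request
        chunks = []
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            chunks.append(chunk)
    return json.loads(b"".join(chunks))
```

Something like `kea_command("/run/kea/kea-dhcp4.socket", "list-commands")` would then return the server's reply dict; error-code checking on `result` is what the module adds on top.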

The integration test uses a predefined setup for
convenience, which runs on Debian trixie and, in CI,
on Ubuntu noble. It makes assumptions about
the default package configuration and paths and is
therefore tricky to run on other distros/OSes. This
only affects running the KEA server as part of the
tests, not the module.
2025-11-03 17:58:49 +01:00
Felix Fontein
3e9f332b9c CI: remove no longer necessary skip/ lines (#11028)
Remove no longer necessary skip/ lines.
2025-11-03 06:48:40 +01:00
Felix Fontein
1c04218434 Extra docs: generate Ansible outputs with 'antsibull-docs ansible-output' (#10421)
* Generate many Ansible outputs with 'antsibull-docs ansible-output'.

* Generate YAML output as well.

* Check ansible-output from CI instead of updating.

* Use reset-previous-blocks meta action; generate more code blocks.

* Use set-template meta action.

* Run ansible-output in CI if anything in docs/ is changed.

* Remove unnecessary allow_duplicate_keys.
2025-11-03 06:48:32 +01:00
Felix Fontein
38d1b47115 Mention code formatting in contribution guide (#11025)
Mention code formatting in contribution guide.
2025-11-03 06:13:35 +01:00
A
e8c482d78e Keycloak_realm: Add admin permissions enabled bool (#11002)
* Keycloak_realm: add admin permissions enabled bool

* Update plugins/modules/keycloak_realm.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/keycloak_realm.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Add Keycloak-add-admin-permissions fragment

* Update changelogs/fragments/11002-keycloak-add-admin-permissions-enabled.yaml

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-03 06:12:27 +01:00
Pär Karlsson
6635bd7742 Add "changed_deps" to portage parameters (#11023)
* Add option for '--changed-deps'

* Add changelog fragment

* Re-add the changed_deps option

* Include link to PR

* Rename fragment properly, and include PR number in name

* Add version string and improve doc description

* Update changelogs/fragments/11023-portage_changed_deps.yml

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>

* Refine documentation string further

* Reformat with ruff

* Add a correct changelog fragment

* Update plugins/modules/portage.py

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-11-03 06:12:09 +01:00
Alexei Znamensky
b28ac655fc xfconf: fix existing empty array case (#11026)
* xfconf: fix existing empty array case

* fix xfconf_info as well

* add changelog frag
2025-11-02 20:20:31 +01:00
Felix Fontein
9a7a316e24 Prepare 12.0.0. 2025-11-02 14:44:50 +01:00
Felix Fontein
09d8b2bb77 Adjust CI schedules: remove stable-9, move stable-10 to weekly. 2025-11-02 14:04:32 +01:00
Felix Fontein
3f7c4a261e Extend description. 2025-11-02 12:05:04 +01:00
Felix Fontein
64976c9d1a Add new PRs (that haven't been merged yet). 2025-11-02 12:05:04 +01:00
Felix Fontein
81b181c76d Mention more PRs that have no changelog. 2025-11-02 12:05:04 +01:00
Felix Fontein
b74ead44ec Some more. 2025-11-02 12:05:04 +01:00
Felix Fontein
07cb4f66c0 Summarize some more. 2025-11-02 12:05:04 +01:00
Felix Fontein
ec81990faa Move all f-string changelog fragments into a single changelog fragment. 2025-11-02 12:05:04 +01:00
Felix Fontein
7520ffb89f Fix commit hash. 2025-11-01 13:47:36 +01:00
Felix Fontein
31734bb13f Add ruff format config. 2025-11-01 13:46:53 +01:00
Felix Fontein
340ff8586d Reformat everything. 2025-11-01 13:46:53 +01:00
Felix Fontein
3f2213791a Cleanup: use f-strings instead of str.format() (#11017)
Address UP032: use f-strings instead of str.format().
2025-11-01 12:04:33 +01:00
Marius Bertram
5d5392786c Add Keycloak module to send execute-actions email to users (#10950)
* Add Keycloak module to send execute-actions email to users

Signed-off-by: Marius Bertram <marius@brtrm.de>

* Fix Example Typo

Signed-off-by: Marius Bertram <marius@brtrm.de>

* Break if argument_spec() is broken

Signed-off-by: Marius Bertram <marius@brtrm.de>

* Adjust to new tests in main.

* Remove unnecessary version_added.

---------

Signed-off-by: Marius Bertram <marius@brtrm.de>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-10-30 20:55:31 +01:00
mirabilos
eb6337c0c9 omapi_host: fix bytes vs. str confusion (#11001)
* omapi_host: fix bytes vs. str confusion

After an update of the control node from Debian
bookworm to trixie, the omapi_host module fails to
work with the error message:

Key of type 'bytes' is not JSON serializable by the
'module_legacy_m2c' profile.

https://github.com/ansible/ansible/issues/85937 had the
same error, but the fix is a bit more intricate here
because the result dict is dynamically generated from
an API response object.

This also fixes unpacking the MAC and IP address and
hardware type, which were broken for Python3.
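
The class of failure described above is easy to reproduce: `json` cannot serialize dict keys of type `bytes`, so a result dict built from a binary API response needs its keys (and string values) decoded first. A minimal sketch of such a conversion (the helper name is hypothetical; the real fix also unpacks MAC/IP addresses structurally, not just textually):

```python
import json


def decode_bytes(obj):
    """Recursively turn bytes keys/values into str so the result is JSON serializable.

    Hypothetical helper for illustration; the actual omapi_host fix is more
    involved because some values (MAC and IP addresses, hardware type) need
    structural unpacking rather than plain UTF-8 decoding.
    """
    if isinstance(obj, dict):
        return {decode_bytes(k): decode_bytes(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [decode_bytes(v) for v in obj]
    if isinstance(obj, bytes):
        return obj.decode("utf-8", errors="replace")
    return obj


result = decode_bytes({b"hardware-type": b"ethernet"})
print(json.dumps(result))  # {"hardware-type": "ethernet"}
```

With bytes keys left in place, `json.dumps` raises `TypeError`, which is what surfaces as the "not JSON serializable" module failure.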

* Merge suggestion for changelog fragment

Co-authored-by: Felix Fontein <felix@fontein.de>

* do not unpack_ip twice

Noticed by Felix Fontein <felix@fontein.de>

* mention py3k in changelog fragment, too

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-10-30 20:19:06 +01:00
Felix Fontein
74c2c804e5 Cleanup: use super() instead of super(__class__, self) (#11016)
* Address UP008: Use super() instead of super(__class__, self).

* Linting.
2025-10-30 20:17:26 +01:00
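The UP008 rule referenced above rewrites the explicit two-argument `super(__class__, self)` form to the zero-argument `super()`, which is equivalent on Python 3. A self-contained sketch with hypothetical class names:

```python
# Hypothetical illustration of ruff rule UP008: zero-argument super().
class Base:
    def greet(self):
        return "base"

class Child(Base):
    def greet(self):
        # Before (Python 2 compatible): super(Child, self).greet()
        # After: the zero-argument form resolves identically on Python 3.
        return super().greet() + "+child"

print(Child().greet())
```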
Felix Fontein
0c5466de47 Cleanup: remove unicode prefix, remove explicit inheritance from object (#11015)
* Address UP025: remove unicode literals from strings.

* Address UP004: class inherits from 'object'.
2025-10-30 20:17:10 +01:00
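Both rules named above are purely cosmetic on Python 3, where the `u""` prefix is a no-op and every class implicitly inherits from `object`. A small sketch with made-up class names:

```python
# Hypothetical before/after for ruff rules UP025 and UP004.

# UP025: the u"" prefix is redundant on Python 3 -- the literals are equal.
assert u"text" == "text"

# UP004: explicit inheritance from object is redundant.
class OldStyle(object):   # before
    pass

class NewStyle:           # after -- identical MRO on Python 3
    pass

print(NewStyle.__mro__)
```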
Felix Fontein
f61847b116 Configure 'ruff check' in CI (#10998)
Configure ruff check in CI.
2025-10-29 17:14:43 +00:00
Felix Fontein
6088b0cff5 CI: add type checking (#10997)
* Set up type checking with mypy.

* Make mypy pass.

* Use list() instead of sorted().
2025-10-29 17:13:38 +00:00
Alexey Langer
831787619a Add exclude option to filetree module (#10966)
* Added excludes option to filetree module

* Renamed option 'excludes' to 'exclude'

* Corrected issue and PR links

Co-authored-by: Felix Fontein <felix@fontein.de>

* Added version for documentation

Co-authored-by: Felix Fontein <felix@fontein.de>

* Fixed example of using exclude option

Co-authored-by: Felix Fontein <felix@fontein.de>

* Fixed regular expression in example

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-10-29 17:46:26 +01:00
David Jenkins
e84f59a62d fix(pritunl_user): improve resilience to null or missing user parameters (#10955)
* fix(pritunl_user): improve resilience to null or missing user parameters

* added changelog fragment - 10955

* standardize 10955 changelog fragment content

Co-authored-by: Felix Fontein <felix@fontein.de>

* simplify user params comparison

Co-authored-by: Felix Fontein <felix@fontein.de>

* simplify list fetch

Co-authored-by: Felix Fontein <felix@fontein.de>

* simplify remote value retrieval

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: djenkins <djenkins@twosix.net>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-10-29 17:45:29 +01:00
Alexei Znamensky
7e8e8948a3 npm: improve parameter validation (#10983)
* npm: improve parameter validation

* add changelog frag

* add required_if clause

* fix required_if, add required_one_of, add docs

* Update plugins/modules/npm.py

* Update plugins/modules/npm.py

* Update plugins/modules/npm.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/npm.py

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-10-29 17:42:51 +01:00
Felix Fontein
54af64ad36 keycloak_user: mark credentials[].value as no_log=True (#11005)
Mark credentials[].value as no_log=True.
2025-10-29 17:42:29 +01:00
Matthew
ce0d06b306 onepassword: extend CLI class initialization with additional parameters (#10965)
* onepassword: extend CLI class initialization with additional parameters

* add changelog fragment 10965-onepassword-bugfix.yml

* Update changelogs/fragments/10965-onepassword-bugfix.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-10-28 21:25:45 +01:00
Stéphane Graber
a1bf2fc44a Add Incus inventory plugin (#10972)
* BOTMETA: Add Incus inventory plugin

Signed-off-by: Stéphane Graber <stgraber@stgraber.org>

* plugins/inventory: Implement basic Incus support

This is a simple inventory plugin leveraging the local `incus` command
line tool. It supports accessing multiple remotes and projects, builds a
simple group hierarchy based on the remotes and projects and exposes
most properties as variable. It also supports basic filtering using the
server-side filtering syntax supported by the Incus CLI.

Signed-off-by: Stéphane Graber <stgraber@stgraber.org>

* plugins/inventory/incus: Add support for constructable groups

This allows the use of constructable groups and also allows disabling
the default group structure.

Signed-off-by: Stéphane Graber <stgraber@stgraber.org>

* plugins/inventory/incus: Add unit tests

Signed-off-by: Stéphane Graber <stgraber@stgraber.org>

---------

Signed-off-by: Stéphane Graber <stgraber@stgraber.org>
2025-10-28 21:24:09 +01:00
nbragin4
af8c4fb95e terraform: Fix bug when None values aren't processed correctly (#10961)
* terraform: Fix bug when None values aren't processed correctly

Just found that I can't pass null values as complex variables into terraform using this module, while I can do that with terraform itself. Fixed the undesired behavior.

* chore: changelog fragment 10961-terraform-complexvars-null-bugfix.yaml

* Update changelogs/fragments/10961-terraform-complexvars-null-bugfix.yaml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/terraform.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/terraform.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Fix condition to check for None type in terraform.py

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-10-28 20:41:04 +01:00
Alexei Znamensky
c889a4cb6d deprecate oneandone modules (#10994)
* deprecate oneandone modules

* add mod util to runtime.yml

* add changelog frag

* change deprecation version to 13.0.0

* change deprecation version to 13.0.0 in readme.yml as well

* Update changelogs/fragments/10994-oneandone-deprecation.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-10-29 08:13:55 +13:00
Alexei Znamensky
6829a064a5 dimensiondata: deprecation (#10986)
* dimensiondata: deprecation

* add changelog frag

* typo

* Update changelogs/fragments/10986-deprecation-dimensiondata.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-10-28 10:50:56 +13:00
Alexei Znamensky
efad7a0d38 unit tests: use f-strings (#10993) 2025-10-27 12:32:33 +13:00
Alexei Znamensky
e177d1e61a unit tests (modules): use f-strings (#10992)
* unit tests (modules): use f-strings

* Apply suggestions from code review
2025-10-27 11:08:33 +13:00
Felix Fontein
f6781f654e CI: temporarily disable tests for copr (#10988)
Temporarily disable tests for copr.
2025-10-26 21:48:20 +01:00
Alexei Znamensky
adcc683da7 modules [t-z]*: use f-strings (#10978)
* modules [t-z]*: use f-strings

* add changelog frag

* remove extraneous file
2025-10-26 22:36:03 +13:00
Alexei Znamensky
af246f8de3 modules s[f-z]*: use f-strings (#10977)
* modules s[f-z]*: use f-strings

* add changelog frag
2025-10-26 22:35:30 +13:00
Alexei Znamensky
73452acf84 modules s[a-e]*: use f-strings (#10976)
* modules s[a-e]*: use f-strings

* add changelog frag
2025-10-26 22:34:24 +13:00
Alexei Znamensky
32dd5f04c5 uthelper: make str and repr generic in base class (#10985)
* uthelper: make str and repr generic in base class

* Update tests/unit/plugins/modules/uthelper.py
2025-10-26 09:40:47 +01:00
Alexei Znamensky
cf663c6e95 dnf_versionlock: docs and comments for Python < 3.6 (#10984) 2025-10-26 21:23:58 +13:00
Alexei Znamensky
39291d3c60 keyring & keyring_info: import shlex directly (#10981)
* keyring & keyring_info: import shlex directly

* add changelog frag
2025-10-26 08:02:03 +01:00
Alexei Znamensky
fa662c0f1a mqtt: remove code for unsupported Python versions (#10980)
* mqtt: remove code for unsupported Python versions

* add changelog frag
2025-10-26 08:01:52 +01:00
Alexei Znamensky
032d398c0a module utils: use f-strings (#10979)
* module utils: use f-strings

* add changelog frag
2025-10-26 08:01:38 +01:00
Alexei Znamensky
b527e80307 modules [lm]*: use f-strings (#10971)
* modules [lm]*: use f-strings

* add changelog frag
2025-10-26 07:57:24 +01:00
Alexei Znamensky
4a6a449fbd modules [jk]*: use f-strings (#10970)
* modules [jk]*: use f-strings

* add changelog frag

* Apply suggestions from code review

* typing insanity
2025-10-26 07:54:15 +01:00
Alexei Znamensky
8120e9347e modules p*: use f-strings (#10974)
* modules p*: use f-strings

* add changelog frag
2025-10-26 07:48:51 +01:00
Alexei Znamensky
d51e4c188b modules r*: use f-strings (#10975)
* modules r*: use f-strings

* add changelog frag

* Apply suggestions from code review
2025-10-26 07:48:33 +01:00
Alexei Znamensky
749c06cd01 modules [no]*: use f-strings (#10973)
* modules [no]*: use f-strings

* add changelog frag
2025-10-26 07:48:10 +01:00
Alexei Znamensky
50846b7560 modules i[^p]*: use f-strings (#10969)
* remove extraneous to_native()

* add changelog frag

* Apply suggestions from code review
2025-10-25 13:41:49 +02:00
Alexei Znamensky
0b6e99b28b modules ip*: use f-strings (#10968)
* modules ip*: use f-strings

* add changelog frag
2025-10-25 02:54:37 +02:00
Alexei Znamensky
0ef2235929 modules bc*: use f-strings (#10945)
* modules bc*: use f-strings

* no quotes or backticks inside f-strs

* add changelog frag

* rename changelog frag file

* rename changelog frag file

* copr: re-applied change maintain original logic
2025-10-25 01:45:40 +02:00
Alexei Znamensky
f9b4abf930 modules h*: use f-strings (#10959)
* modules h*: use f-strings

* add changelog frag
2025-10-25 00:59:12 +02:00
Alexei Znamensky
b67e7c83cf modules g*: use f-strings (#10958)
* modules g*: use f-strings

* add changelog frag

* remove extraneous to_native()
2025-10-25 00:54:38 +02:00
Alexei Znamensky
a3987c9844 modules def*: use f-strings (#10947)
* modules def*: use f-strings

* remove !s from f-strings

* add changelog frag
2025-10-23 22:12:10 +02:00
Stanislav Shamilov
258e65f5fc keycloak_user_rolemapping: docs fixes and examples about mapping realm roles in keycloak_user_rolemapping (#10953)
* Fix docs and add examples about mapping realm roles for keycloak_user_rolemapping.py module (#7149)

* fix sanity tests
2025-10-23 21:25:26 +02:00
Alexei Znamensky
4c7be8f268 cloudflare_dns: rollback validation for CAA records (#10956)
* cloudflare_dns: rollback validation for CAA records

* add changelog frag
2025-10-23 06:52:34 +02:00
Alexei Znamensky
d86340b9d3 modules a*: use f-strings (#10942)
* modules a*: use f-strings

* add changelog frag

* add changelog frag

* rename changelog frag file
2025-10-23 06:50:32 +02:00
Alexei Znamensky
0feabaa7da keycloak: use f-strings (#10941)
* keycloak: use f-strings

* remove nested f-str

* add changelog frag
2025-10-22 23:17:06 +02:00
Alexei Znamensky
728856f611 redfish_utils module utils: use f-strings (#10939)
* redfish_utils: use f-strings

* add changelog frag

* remove nested f-str
2025-10-22 23:15:41 +02:00
Christer Warén
66578d0b2c ipa_host: add userclass and locality parameters (#10935)
* ipa_host: add userclass and locality parameters

* Changelog Fragment - 10935

* Update plugins/modules/ipa_host.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/ipa_host.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/ipa_host.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update changelogs/fragments/10935-ipa-host-add-parameters.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-10-22 23:14:55 +02:00
Alexei Znamensky
7572b46c7b filesystem: docs adjustments (#10948) 2025-10-21 06:25:43 +02:00
Marius Bertram
c850e209ab Add support for client auth in Keycloak client secrets module (#10933)
* keycloak: add client authentication support for client_secret

Signed-off-by: Marius Bertram <marius@brtrm.de>

* readd ['token', 'auth_realm']

Signed-off-by: Marius Bertram <marius@brtrm.de>

---------

Signed-off-by: Marius Bertram <marius@brtrm.de>
2025-10-19 21:12:41 +02:00
Alexei Znamensky
d4dfc217d8 xenserver: use f-strings (#10940)
* xenserver: use f-strings

* add changelog frag
2025-10-19 17:40:28 +02:00
carlfriedrich
7e666a9c31 fix(modules/gitlab_runner): Fix exception in check mode on new runners (#10918)
* fix(modules/gitlab_runner): Fix exception in check mode on new runners

When a new runner is added in check mode, the role used to throw an
exception. Fix this by returning a valid runner object instead of a
boolean.

Fixes #8854

* docs: Add changelog fragment
2025-10-19 08:54:21 +02:00
Alexei Znamensky
2bd44584d3 cloudflare_dns: rollback validation for SRV records (#10937)
* cloudflare_dns: rollback validation for SRV records

* add changelog frag
2025-10-18 09:43:46 +02:00
Felix Fontein
9dedd77459 Add __init__.py to work around ansible-test/pylint bug (#10926)
Add __init__.py to work around ansible-test/pylint bug.
2025-10-15 21:42:55 +02:00
Felix Fontein
8472dc22ea Add stable-2.20 to CI, bump version of devel branch (#10923)
Add stable-2.20 to CI, bump version of devel branch.
2025-10-15 08:41:04 +02:00
Alexei Znamensky
3b83df3f79 modules: update code to python3 (#10904)
* modules: update code to python3

* pamd: rollback changes

* add changelog frag

* fix/improve assignments using generators

* Update plugins/modules/launchd.py

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-10-13 21:42:48 +02:00
dependabot[bot]
c5253c5007 build(deps): bump github/codeql-action from 3 to 4 in the ci group (#10914)
Bumps the ci group with 1 update: [github/codeql-action](https://github.com/github/codeql-action).


Updates `github/codeql-action` from 3 to 4
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/github/codeql-action/compare/v3...v4)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-version: '4'
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: ci
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-10-13 08:24:07 +02:00
Alexei Znamensky
07cfd6c4b4 update code to python3 (#10903)
* update code to python3

* add changelog frag

* rollback adjustment for plugins/lookup/lmdb_kv.py

* accept PR suggestion for plugins/module_utils/utm_utils.py

* accept PR suggestion for plugins/module_utils/vexata.py

* Apply suggestions from code review

* Update changelogs/fragments/10903-2to3.yml

* Update changelogs/fragments/10903-2to3.yml
2025-10-12 11:05:57 +02:00
Alexei Znamensky
056633efaa launchd: remove conditional code for Python < 3.4 (#10909)
* launchd: remove conditional code for Python < 3.4

* add changelog frag
2025-10-12 08:56:05 +02:00
Felix Fontein
21122e926b Remove Python 2 specific parts from integration tests (#10897)
* Remove Python 2 specific parts from integration tests.

* Remove more constraints.
2025-10-12 08:48:50 +02:00
Alexei Znamensky
10bdd9c56b tests/unit/plugins/modules/test_composer.yaml: remove redundant lines (#10910) 2025-10-12 17:36:50 +13:00
Alexei Znamensky
85053728ce linode module utils: update import to recent Ansible level (#10906)
* linode module utils: update import to recent Ansible level

* add changelog frag
2025-10-11 13:42:19 +02:00
Alexei Znamensky
cc83188594 module utils: update code to python3 (#10907)
* module utils: update code to python3

* add changelog frag
2025-10-11 13:42:11 +02:00
Alexei Znamensky
ce544f370c archive: lzma is standard in Python 3.7+ (#10908)
* archive: lzma is standard in Python 3.7+

* add changelog frag
2025-10-11 13:42:01 +02:00
Alexei Znamensky
3734f471c1 use f-strings (#10899)
* use f-strings

* add changelog frag

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-10-11 11:59:28 +02:00
Alexei Znamensky
8a1ed41fe5 java_keystore: simplify code (#10905)
* java_keystore: simplify code

* add changelog frag
2025-10-11 11:46:04 +02:00
Alexei Znamensky
b85e263466 use f-strings in module utils (#10901)
* use f-strings in module utils

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

* remove unused imports

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-10-11 11:43:43 +02:00
Felix Fontein
74b6a0294a Unit tests: clean up compat imports (#10902)
Clean up compat imports.
2025-10-11 10:03:37 +02:00
Felix Fontein
a8977afb04 Remove all usage of ansible.module_utils.six from main branch (#10888)
* Get rid of all six.moves imports.

* Get rid of iteritems.

* Get rid of *_type(s) aliases.

* Replace StringIO import.

* Get rid of PY2/PY3 constants.

* Get rid of raise_from.

* Get rid of python_2_unicode_compatible.

* Clean up global six imports.

* Remove all usage of ansible.module_utils.six.

* Linting.

* Fix xml module.

* Docs adjustments.
2025-10-11 08:21:57 +02:00
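The six removal above swaps each compatibility shim for its plain Python 3 equivalent. A minimal sketch of the most common substitutions listed in the commit (the sample data is illustrative):

```python
# Hypothetical sketch of replacing six idioms with Python 3 equivalents.
import io

d = {"a": 1, "b": 2}

# six.iteritems(d)      ->  d.items()
pairs = sorted(d.items())

# six.string_types      ->  str
assert isinstance("x", str)

# six.moves.StringIO    ->  io.StringIO
buf = io.StringIO()
buf.write("hello")

print(pairs)
```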
Felix Fontein
8f8a0e1d7c Fix __future__ imports, __metaclass__ = type, and remove explicit UTF-8 encoding statement for Python files (#10886)
* Adjust all __future__ imports:

for i in $(grep -REl "__future__.*absolute_import" plugins/ tests/); do
  sed -e 's/from __future__ import .*/from __future__ import annotations/g' -i $i;
done

* Remove all UTF-8 encoding specifications for Python source files:

for i in $(grep -REl '[-][*]- coding: utf-8 -[*]-' plugins/ tests/); do
  sed -e '/^# -\*- coding: utf-8 -\*-/d' -i $i;
done

* Remove __metaclass__ = type:

for i in $(grep -REl '__metaclass__ = type' plugins/ tests/); do
  sed -e '/^__metaclass__ = type/d' -i $i;
done
2025-10-10 19:52:04 +02:00
Alexei Znamensky
633bd6133a remove Python2 some constructs/docs/comments (#10892)
* remove Python2 some constructs/docs/comments

* add changelog frag
2025-10-10 19:15:01 +02:00
Alexei Znamensky
5f471b8e5b refactor dict from literal list (#10891)
* refactor dict from literal list

* add changelog frag
2025-10-10 19:09:10 +02:00
Thomas Sjögren
14a858fd9c random_string: replace random.SystemRandom() with secrets.SystemRandom() (#10893)
* random_string: replace random.SystemRandom() with secrets.SystemRandom()

Signed-off-by: Thomas Sjögren <konstruktoid@users.noreply.github.com>

* add the forgotten blank line

Signed-off-by: Thomas Sjögren <konstruktoid@users.noreply.github.com>

* Update changelogs/fragments/replace-random-with-secrets.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* readd the description

Signed-off-by: Thomas Sjögren <konstruktoid@users.noreply.github.com>

* Update changelogs/fragments/replace-random-with-secrets.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Signed-off-by: Thomas Sjögren <konstruktoid@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-10-10 19:08:16 +02:00
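For context on the change above: `secrets.SystemRandom` is the same `os.urandom`-backed generator as `random.SystemRandom`, so the swap is behavior-preserving while signaling cryptographic intent. A small sketch (the 16-character alphanumeric string is an assumed example, not the module's actual format):

```python
# Sketch: secrets.SystemRandom as a drop-in for random.SystemRandom.
import secrets
import string

rng = secrets.SystemRandom()
candidates = string.ascii_letters + string.digits
token = "".join(rng.choice(candidates) for _ in range(16))

print(token)
```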
Felix Fontein
68b8345199 pacman: link to yay bug report (#10887)
Link to yay bug report.
2025-10-10 18:14:38 +13:00
Felix Fontein
04e720f2e4 Drop support for ansible-core 2.16, and thus for Python < 3.7 (#10884)
Drop support for ansible-core 2.16, and thus for Python < 3.7.
2025-10-09 18:31:05 +02:00
Felix Fontein
0b72737cab Bump version of main to 12.0.0; execute announced deprecations (#10883)
* Bump version to 12.0.0.

* Remove deprecated modules and plugins.

* state is now required.

* Change default of prepend_hash from auto to never.

* Remove support for force=''.

* Always delegate 'debug'.

* Remove ignore_value_none and ctx_ignore_none parameters.

* Remove parameters on_success and on_failure.

* Update BOTMETA.

* Adjust docs reference.

* Forgot required=True.

* Fix changelog fragment.

* Adjust unit tests.

* Fix changelog.

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>

---------

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-10-09 13:50:07 +02:00
desand01
f34842b7b2 Keycloak client scope support (#10842)
* first commit

* sanity

* fix test

* trailing white space

* sanity

* Fragment

* test sanity

* Update changelogs/fragments/10842-keycloak-client-scope-support.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/keycloak_client.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* add client_scopes_behavior

* Sanity

* Sanity

* Update plugins/modules/keycloak_client.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Fix typo.

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>

* Update plugins/modules/keycloak_client.py

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>

* Update plugins/modules/keycloak_client.py

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>

* Update plugins/modules/keycloak_client.py

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>

* Update plugins/modules/keycloak_client.py

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>

---------

Co-authored-by: Andre Desrosiers <andre.desrosiers@ssss.gouv.qc.ca>
Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-10-06 18:16:27 +02:00
Chris
30894f4144 github_app_access_token: add support for GitHub Enterprise Server (#10880)
* github_app_access_token: add support for GitHub Enterprise Server (#10879)
Add an option to specify the API endpoint for a GitHub Enterprise Server.
If the option is not specified, it defaults to https://api.github.com.

* refactor: apply changes as suggested by felixfontein

* docs: fix nox check error and typo

nox check: plugins/lookup/github_app_access_token.py:57:1: DOCUMENTATION: error: too many blank lines (1 > 0)  (empty-lines)

* refactor: apply changes as suggested by russoz

* refactor: apply changes as suggested by felixfontein
2025-10-06 18:14:24 +02:00
Giorgos Drosos
cc41d9da60 gem: fix soundness issue when uninstalling default gems on Ubuntu (#10689)
* Attempt to fix gem soundness issue

* Return command execution

* Fix value error

* Attempt to fix failing tests

* Fix minor issues

* Update changelog

* Update tests/integration/targets/gem/tasks/main.yml

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>

* Update changelogs/fragments/10689-gem-prevent-soundness-issue.yml

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>

* Remove state and name from gem error message

* Improve gem uninstall check

* Make unit tests pass

* Fix linting issues

* gem: Remove length check and adapt unit tests

* Adapt gem unit tests

* gem: improve error msg

* Fix sanity error

* Fix linting issue

---------

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-10-05 07:15:25 +02:00
Alexei Znamensky
750adb431a pipx: adjustments for pipx 1.8.0 (#10874)
* pipx: adjustments for pipx 1.8.0

* add changelog frag

* typo
2025-10-05 07:06:01 +02:00
Felix Fontein
6cd4665412 Avoid six in plugin code (#10873)
Avoid six in plugin code.
2025-10-05 06:56:32 +02:00
Sebastian Damm
9d0150b2c3 [doc] update requirements for all consul modules/lookups (#10863)
* [doc] update requirements for consul_kv module

python-consul has been unmaintained for a while. It uses a legacy way of passing the Consul token when sending requests. This leads to warning messages in the Consul log, and will eventually break communication. Using the maintained py-consul library ensures compatibility with newer Consul versions.

* [doc] replace all python-consul occurrences with py-consul

* [fix] tests and possible pip server errors

* [chore] remove reference to python-consul in comment

---------

Co-authored-by: Sebastian Damm <sebastian.damm@pascom.net>
2025-10-03 07:09:20 +02:00
Pierre Riteau
41b65161bd Fix typos: s/the the/the/ (#10867) 2025-09-30 21:17:01 +02:00
Alexei Znamensky
4b644ae41b docs: fix sphinx warnings in uthelper guide (#10864) 2025-09-28 20:30:54 +13:00
Felix Fontein
e9b1788bb9 Add repository configuration to antsibull-nox.toml. 2025-09-26 07:03:53 +02:00
Felix Fontein
8b5f4b055f Fix RST syntax error (#10861)
Fix RST syntax error.
2025-09-25 21:08:28 +02:00
Felix Fontein
68684a7a4c github_deploy_key: make sure variable exists before use (#10857)
Make sure variable exists before use.
2025-09-25 20:34:50 +02:00
Felix Fontein
648ff7db02 yaml cache plugin: make compatible with ansible-core 2.19 (#10852)
Make compatible with ansible-core 2.19.
2025-09-25 06:57:37 +02:00
X
0f23b9e391 Force Content-type header to application/json if is_pre740 is false (#10832)
* Force Content-type header to application/json if is_pre740 is false

* Remove response variable from fail_json module

* Add a missing blank line to match pep8 requirement

* Add changelog fragment of issue #10796

* Rename fragment section

* Improve fragment readability

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: ludovic <ludovic.petetin@aleph-networks.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-09-21 20:28:03 +02:00
Jakub Danek
b865bf5751 Fix keycloak sub-group search (#10840)
* fix bug in missing realm argument when searching for groups

* MR change fragment

* 39+1=40
2025-09-21 20:27:42 +02:00
desand01
7c40c6b6b5 Keycloak role fix changed status (#10829)
* Exclude aliases before comparison

* add test

* fragment

* Update changelogs/fragments/10829-fix-keycloak-role-changed-status.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Andre Desrosiers <andre.desrosiers@ssss.gouv.qc.ca>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-09-18 21:56:39 +02:00
Felix Fontein
2bf8ae88be timezone: mention that Debian 13 also needs util-linux-extra (#10830)
Mention that Debian 13 also needs util-linux-extra.
2025-09-18 21:56:22 +02:00
David Phillips
7a231a248e gitlab_*_variable: add description option (#10812) 2025-09-18 21:55:28 +02:00
brad2014
833e6e36de homebrew: Support old_tokens and oldnames in homebrew package data (#10805)
* homebrew: Support old_tokens and oldnames in homebrew package data

Fixes #10804

Since brew info will accept old_tokens (for casks) and oldnames (for formulae) when provided by the homebrew module "name" argument, the module also needs to consider these old names as valid for the given package. This commit updates _extract_package_name to do that.

All existing package name tests, including existing tests for name aliases and tap prefixing, have been consolidated with new name tests into package_names.yml.

* Added changelog fragment.

* homebrew: replace non-py2 compliant f-string usage

* code formatting lint, and py2 compatibility fixes

* homebrew: added licenses to new files, nox lint

* Update plugins/modules/homebrew.py

use str.format() instead of string addition

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update tests/integration/targets/homebrew/tasks/casks.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update tests/integration/targets/homebrew/tasks/package_names_item.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update tests/integration/targets/homebrew/tasks/formulae.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Fixes for performance concerns on new homebrew tests.
1) tests for alternate package names are commented out in main.yml.
2) the "install via alternate name, uninstall via base name" test
   case was deemed duplicative, and has been deleted.
3) minor fixes to use jinja2 "~" for string concat instead of "+"

* Fix nox lint

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-09-15 18:26:01 +02:00
Felix Fontein
c1e877d254 github_app_access_token: fix compatibility import of jwt (#10810)
Fix compatibility import of jwt.
2025-09-13 09:17:16 +02:00
Alexei Znamensky
562d2ae5b1 parted: join command list for fail_json message (#10823)
* parted: join command list for fail_json message

* add changelog frag
2025-09-13 09:17:05 +02:00
Alexei Znamensky
0911db457e pipx: review tests (#10822) 2025-09-13 17:29:01 +12:00
Stanislav Shamilov
d2e2395ae3 Speed up tests in android_sdk module (#10818)
Changed the dependency used to test the functionality in the android_sdk module. The previous dependency was ~100MB; the current one is ~6MB. This should speed up the tests a bit and reduce the traffic.
2025-09-12 19:20:18 +02:00
Abhijeet Kasurde
a7e4cee47d Remove obsolete test conditions (#10813)
* Fedora 31 and 32 are EOL, remove conditions related

Signed-off-by: Abhijeet Kasurde <Akasurde@redhat.com>
2025-09-12 06:24:50 +02:00
David Phillips
f772bcda88 gitlab_protected_branch: refactor, add allow_force_push, code_owner_approval_required (#10795)
* gitlab_protected_branch: fix typo

* gitlab_protected_branch: lump parameters into options dictionary

Hardcoding parameter lists gets repetitive. Refactor this module to use
an options dictionary like many other gitlab_* modules. This makes it
cleaner to add new options.

* gitlab_protected_branch: update when possible

Until now, the module deletes and re-creates the protected branch if any
change is detected. This makes sense for the access level parameters, as
these are not easily mutated after creation.

However, in order to add further options which _can_ easily be updated,
we should support updating by default, unless known-immutable parameters
are changing.

* gitlab_protected_branch: add `allow_force_push` option

* gitlab_protected_branch: add `code_owner_approval_required` option

* gitlab_protected_branch: add issues to changelog

* Update changelog.

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-09-08 19:02:40 +02:00
Felix Fontein
efb0c487f6 Next expected release will be 11.4.0. 2025-09-08 18:59:15 +02:00
Felix Fontein
062b63bda5 Add filters to_yaml and to_nice_yaml (#10784)
* Add filters to_yaml and to_nice_yaml.

* Allow to redact sensitive values.

* Add basic tests.

* Work around https://github.com/ansible/ansible/issues/85783.

* Cleanup.
2025-09-08 18:48:49 +02:00
David Phillips
3574b3fa93 gitlab_*_variable: support masked-and-hidden variables (#10787)
* gitlab_*_variable: support masked-and-hidden variables

Support masking and hiding GitLab project and group variables. In the
GitLab API, variables that are hidden are also masked by implication.
Note gitlab_instance_variable is unmodified since instance variables
cannot be hidden.

* gitlab_*_variable: add `hidden` to legacy `vars` syntax

* gitlab_*_variable: address review comments in doc
2025-09-08 18:40:35 +02:00
Julian Thanner
cb84a0e99f Add Option to configure webAuthnPolicies for Keycloak (#10791)
* Add Option to configure webAuthnPolicies for Keycloak

* Mark webauth properties as noLog false

* fix line length

* rename webauthn stuff to match api of keycloak

* rename webauthn stuff to match api of keycloak

* Update changelogs/fragments/keycloak-realm-webauthn-policies.yml

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>

* add version for each type

* Update plugins/modules/keycloak_realm.py

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Julian Thanner <julian.thanner@check24.de>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-09-08 18:37:10 +02:00
Dexter
3baa13a3e4 pacemaker_resource: Add cloning support for resources and groups (#10665)
* add clone state for pacemaker_resource

* add changelog fragment

* Additional description entry for comment header

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/pacemaker_resource.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* fix formatting for yamllint

* Apply code review suggestions

* refactor state name to cloned

* Update plugins/modules/pacemaker_resource.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Apply suggestions from code review

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>

* Apply suggestions from code review

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-09-07 21:24:01 +02:00
Alexei Znamensky
d0123a1038 django_dumpdata, django_loaddata: new modules (#10726)
* django module, module_utils: adjustments

* more fixes

* more fixes

* further simplification

* django_dumpdata/django_loaddata: new modules

* Update plugins/modules/django_dumpdata.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* add note about idempotency

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-09-03 21:49:11 +02:00
David Phillips
aed763dae7 gitlab_*_access_token: add missing scopes (#10785)
Over time, GitLab added extra scopes to the API. I'm in here to add
self_rotate, but may as well add all other missing scopes while I'm
here.
2025-09-03 21:40:06 +02:00
Alexei Znamensky
f1f167e3fc dnf_versionlock: minor refactor (#10783)
* dnf_versionlock: minor refactor

* Python 2 does not appreciate clever syntax

* Update plugins/modules/dnf_versionlock.py

* Update plugins/modules/dnf_versionlock.py

* rollback raw patterns adjustment
2025-09-03 21:38:21 +02:00
Felix Fontein
4a70d4091d Deprecate hiera lookup (#10779)
Deprecate hiera lookup.
2025-08-31 16:15:20 +02:00
Felix Fontein
07ce00417d CI: Add Debian 13 Trixie (#10638)
* Add Debian 13 Trixie to CI.

* Add adjustments.

* Disable one apache2_module test for Debian 13.

* Disable ejabberd_user test on Debian 13.

* Fix paramiko install.

* Skip cloud_init_data_facts on Debian 13.

* Fix postgresql setup.

* Fix timezone tests.
2025-08-31 16:15:09 +02:00
Alexei Znamensky
e6502a8e51 xenserver: remove required=false from arg spec (#10769)
* xenserver: remove required=false from arg spec

* add changelog frag
2025-08-31 11:49:09 +02:00
Alexei Znamensky
f6e1d90870 parted: command args as list rather than string (#10642)
* parted: command args as list rather than string

* add changelog frag

* add missing command line dash args

* make scripts as lists as well

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-31 11:48:24 +02:00
Alexei Znamensky
6f40eff632 simplify string formatting in some modules (#10727)
* simplify string formatting in some modules

* add changelog frag
2025-08-31 11:46:43 +02:00
Alexei Znamensky
3cc4f28fd7 minor fixes in doc guides (#10770) 2025-08-31 21:43:36 +12:00
Felix Fontein
b498435066 zpool: fix broken example (#10768)
Fix broken example.
2025-08-31 11:41:57 +02:00
Hoang Nguyen
f6003f61cc selective: don't hard code ansible_loop_var 'item' (#10752)
* selective: don't hard code ansible_loop_var 'item'

* Add changelog fragment

* Update changelog message

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-29 06:32:15 +02:00
Thibault Geoffroy
d6ad9beb58 kdeconfig: add support for kwriteconfig6 (#10751)
* kdeconfig: add support for kwriteconfig6

Rationale:
With a minimal install of KDE Plasma 6, the kdeconfig module would systematically fail with the following error: `kwriteconfig is not installed.`
In this configuration, kwriteconfig6 is the only version of kwriteconfig installed, and the kdeconfig module could not find it.

Fixes #10746

* Add changelog fragment

* Update changelogs/fragments/10751-kdeconfig-support-kwriteconfig6.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-29 06:25:11 +02:00
Simon Kelly
469e557b95 monit: handle arbitrary error status (#10743)
* handle arbitrary error status

* add changelog fragment

* mock module in test

* Update changelogs/fragments/10743-monit-handle-unknown-status.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-28 22:09:25 +02:00
Allen Smith
b1c75339c0 openbsd_pkg: add support for removing unused dependencies (#10705)
* openbsd_pkg: add support for removing unused dependencies

Add new state 'rm_unused_deps' that uses 'pkg_delete -a' to remove
packages that are no longer required by any other packages.

Features:
- Requires name='*' to avoid accidental usage
- Supports check mode, diff mode, clean and quick flags
- Follows existing module patterns for error handling
- Integrates with existing package list comparison for change detection

* Update the PR number in the fragment link

* Fix the changelog fragment name to include the PR #

* Force non-interactive mode like most of the other modes

* Fix PEP8 E302: add missing blank line before function definition

* Ensure that no matter what, if the package list is unchanged then there was no change

Also removed some unused vars from the original code.

* Standardize names in the PR

* Swap over from a new state to implementing an autoremove option

Added code to handle the case where you give a name or list of names, as
pkg_delete will correctly filter what it autoremoves by the names

* Update the fragment to match the new code

* typo in EXAMPLES

* Fix up a yamllint complaint.

I do note the following:

```
$ ansible-lint tests/test_openbsd_pkg.yml

Passed: 0 failure(s), 0 warning(s) on 1 files. Last profile that met the validation criteria was 'production'.
```

Although that could be due to local config

* While here add realistic examples of packages that might be autoinstalled

* Clean up docs.

Co-authored-by: Felix Fontein <felix@fontein.de>

* Autoremove is an option, work like the other package managers

* Update changelog for openbsd_pkg autoremove parameter

Clarified the behavior of the `autoremove` parameter to specify it removes autoinstalled packages. Removed flowery text that isn't needed.

* Cut the rest of the cruft out of the changelog fragment

Make it obvious how '*' can be used as a 'name:'
Be more pythonic in the package list comparison.

* Update changelogs/fragments/10705-openbsd-pkg-remove-unused.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-28 22:09:15 +02:00
Felix Fontein
9d0866bfb8 Add ignores necessary for ansible-core 2.20 (#10755)
Add ignores necessary for ansible-core 2.20 if Python 2.7 is still supported by the collection.
2025-08-28 21:33:09 +02:00
Dexter
c881be0999 pacemaker_cluster: deprecate cleanup state (#10741)
* Add deprecation for pacemaker_cluster cleanup state

* Add changelog fragment

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-27 22:04:01 +02:00
Dexter
3b09e9d9ed pacemaker_resource: add cleanup state (#10413)
* refactor(deprecate): Add cleanup deprecations for pacemaker_cluster

* Additional code review changes

* Add changelog fragment
2025-08-27 22:02:59 +02:00
Dexter
6332175493 pacemaker: Add regex checking for maintenance-mode (#10707)
* Add regex checking for maintenance-mode

* Add changelog fragment

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-27 22:01:47 +02:00
Abhijeet Kasurde
b5a2c5812c random_string: Specify seed while generating random string (#10710)
* random_string: Specify seed while generating random string

* Allow user to specify seed to generate random string

Fixes: #5362

Signed-off-by: Abhijeet Kasurde <Akasurde@redhat.com>

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Signed-off-by: Abhijeet Kasurde <Akasurde@redhat.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-27 22:00:44 +02:00
Alexei Znamensky
ded43714d3 django module, module_utils: adjustments (#10684)
* django module, module_utils: adjustments

* fix name

* more fixes

* more fixes

* further simplification

* add changelog frag
2025-08-27 21:53:20 +02:00
Felix Fontein
5ee02297b0 ssh_config tests: remove paramiko version restriction (#10732)
Remove paramiko version restriction for ssh_config tests.
2025-08-25 06:56:44 +02:00
Felix Fontein
82b37bdb56 pacman: re-enable yay test (#10728)
Re-enable yay test.
2025-08-25 06:42:35 +02:00
Alexei Znamensky
cb84fa740a remove extra brackets when params are given by a comprehension (#10712)
* remove extra brackets when function params are given by a comprehension

* add changelog frag
2025-08-23 19:14:39 +02:00
Alexei Znamensky
62fa3e6f2b remove trailing comma in dict(parameters,) (#10711)
* remove trailing comma in dict(parameters,)

* add changelog frag
2025-08-23 19:13:20 +02:00
Felix Fontein
5eab0f2419 CI: Remove no longer necessary constraints (#10706)
Remove no longer necessary constraints.
2025-08-23 18:41:38 +02:00
Marc Urben
177b385dfb Add support for gpg-auto-import-keys option to zypper (#10661)
* Add support for gpg-auto-import-keys option to zypper

* Add changelog fragment

* Add missing module argument_spec

* Improving documentation

* Improve changelog fragment
2025-08-23 18:38:00 +02:00
weisheng-p
65bc47068e GitHub app access token lookup: allow to use PyJWT + cryptography instead of jwt (#10664)
* Fix issue #10299

* Fix issue #10299

* Fix blank lines

* Fix blank lines

* Add compatibility changes for jwt

* Bump to a higher magic number

* Update change log fragment

* Update changelogs/fragments/10299-github_app_access_token-lookup.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update changelogs/fragments/10299-github_app_access_token-lookup.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update changelogs/fragments/10299-github_app_access_token-lookup.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/lookup/github_app_access_token.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/lookup/github_app_access_token.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update requirement document

* Remove a whitespace

---------

Co-authored-by: Bruno Lavoie <bruno.lavoie@dti.ulaval.ca>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-23 18:36:53 +02:00
Dexter
e43735659a pacemaker_stonith: new module (#10195)
* feat(initial): Add pacemaker_stonith module and unit tests

* feat(initial): Add working changes to pacemaker_stonith

* refactor(review): Apply code review suggestions

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>

* refactor(review): Additional code review items

* bug(cli_action): Add missing runner arguments

* Apply code review suggestions

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

* Apply suggestions from code review

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>

* WIP

* Apply doc changes to pacemaker stonith

* Update plugins/modules/pacemaker_stonith.py

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-23 18:36:32 +02:00
mscherer
09f11523d1 Add cpu limit argument to scaleway_container (#10646)
Add cpu limit arguments

And document the units used for memory_limit and cpu_limit.
2025-08-23 18:36:00 +02:00
Alexei Znamensky
9e86d239d2 oci/oracle: deprecation (#10652)
* oci/oracle: deprecation

* add changelog frag

* add doc frags to changelog frag

* Update changelogs/fragments/10652-oracle-deprecation.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-23 18:35:37 +02:00
David Phillips
1c0eb9ddf4 gitlab_*_access_token: add planner access level (#10679)
The Planner role was introduced in December 2024 with GitLab 17.7 [1].
Allow its use in gitlab_project_access_token and
gitlab_group_access_token.

[1]: https://about.gitlab.com/releases/2024/12/19/gitlab-17-7-released/
2025-08-23 18:35:17 +02:00
mscherer
29b35022cf Add a scaleway group to be able to use module_defaults (#10647) 2025-08-23 18:34:52 +02:00
bofo540
db7757ed4b Update documentation (#10696)
* Update documentation

Added to the description explaining the mode of operation and the protocol being used.
This would add to the user experience and saves time for the user.

* use single quotes around colon-containing list element to satisfy linter

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

* documentation of nagios module - included all nagios configuration paths in plugins/modules/nagios.py

* used italic code I(...) for paths

* added trailing comma to nagios.cfg path listing

Co-authored-by: Felix Fontein <felix@fontein.de>

* added trailing period after icinga path listing.

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: bjt-user <bjoern.foersterling@web.de>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-23 18:34:21 +02:00
Alexei Znamensky
9f4bb3a788 django_check: rename database param, add alias (#10700)
* django_check: rename database param, add alias

* add changelog frag

* Update plugins/modules/django_check.py

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-23 18:33:52 +02:00
Alexei Znamensky
3b9acafc72 update requirements for Python versions currently used (#10701) 2025-08-19 17:01:49 +12:00
Dexter
b9385d7fe8 pacemaker_resource: Fix resource_type parameter (#10663)
* Ensure resource standard, provider, and name are proper format

* Add changelog fragment

* Update changelogs/fragments/10663-pacemaker-resource-fix-resource-type.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-18 20:15:20 +02:00
dependabot[bot]
6827680cda build(deps): bump actions/checkout from 4 to 5 in the ci group (#10695)
Bumps the ci group with 1 update: [actions/checkout](https://github.com/actions/checkout).


Updates `actions/checkout` from 4 to 5
- [Release notes](https://github.com/actions/checkout/releases)
- [Changelog](https://github.com/actions/checkout/blob/main/CHANGELOG.md)
- [Commits](https://github.com/actions/checkout/compare/v4...v5)

---
updated-dependencies:
- dependency-name: actions/checkout
  dependency-version: '5'
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: ci
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2025-08-18 18:18:42 +02:00
Felix Fontein
47e8a3c193 ansible-core 2.20: avoid deprecated functionality (#10687)
Avoid deprecated functionality.
2025-08-18 06:25:23 +02:00
Felix Fontein
ceba0cbedb pids: avoid type error if name is empty (#10688)
Avoid type error if name is empty.
2025-08-18 06:24:30 +02:00
Alexei Znamensky
c84f16c5e9 scaleway_lb: fix RETURN docs (#10617)
* scaleway_lb: fix RETURN docs

* remove outer dict from sample content
2025-08-17 17:05:57 +02:00
Daniel Hoffend
735a066d92 apache2_module: updated cgi action conditions (#10423)
* apache2_module: updated cgi action conditions

Only the activation of the cgi module in threaded mode should be a
restriction due to apache2 limitations, not the deactivation,
especially when the cgi module is not enabled yet at all. Fixes #9140

* bug(fix): apache2_module fails to disable cgi module

* Update changelog fragment.

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-17 12:52:49 +02:00
Alexei Znamensky
13bd4b5d82 composer: fix command args as list rather than string (#10669) 2025-08-17 12:43:29 +02:00
Felix Fontein
dfc2a54d16 pacman: temporarily disable yay test (#10674)
Temporarily disable pacman yay test.
2025-08-15 20:52:41 +02:00
Alexei Znamensky
d84d2397b9 ipa_*: adjust common connection notes to modules (#10668) 2025-08-15 23:48:44 +12:00
Alexei Znamensky
3c0d60740c jc filter: remove skips for FreeBSD (#10657) 2025-08-12 09:46:26 +02:00
Felix Fontein
eb5708a125 CI: Make sure to install Java in Debian Bullseye (#10653)
Make sure to install Java in Debian Bullseye.
2025-08-12 01:09:04 +02:00
Alexei Znamensky
89b6717888 oneview module utils: remove unused import of "os" package (#10644)
* oneview module utils: remove unused import of "os" package

* add changelog frag

* Update changelogs/fragments/10644-oneview-os.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-11 23:21:40 +02:00
Felix Fontein
621bfda7ad Next expected release will be 11.3.0. 2025-08-11 21:36:17 +02:00
Felix Fontein
bc90635e66 pipx examples and tests: fix terminology (#10649)
Fix terminology.
2025-08-11 20:34:44 +02:00
Alexei Znamensky
2aa53706f5 jc filter: remove redundant noqa comment (#10643) 2025-08-11 21:56:49 +12:00
Alexei Znamensky
993e3a736e ipa_*: add common connection notes to modules (#10615)
* ipa_*: add common connection notes to modules

* Update plugins/doc_fragments/ipa.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/doc_fragments/ipa.py

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-11 06:44:38 +02:00
Klention Mali
92ca379319 lvm_pv - Fixes #10444 - Partition device not found (#10596)
* Skip rescan for partition devices in LVM PV module

Adds a check to prevent unnecessary rescan attempts on partition devices in the LVM physical volume module. When a device is actually a partition, attempting to rescan it via sysfs would fail since partitions don't have a rescan interface.

This change improves error handling by gracefully skipping the rescan operation when dealing with partition devices, avoiding misleading warning messages.

* Rewrote device rescan logic
Added changelog fragment

* Add issue reference to lvm_pv changelog entry
2025-08-11 06:43:47 +02:00
Vladimir Botka
2321d27288 Docs. Remove helpers. (#8647) 2025-08-10 14:14:42 +02:00
Alexei Znamensky
c16cf774d7 xbps: command args as list rather than string (#10608)
* xbps: command args as list rather than string

* add changelog frag
2025-08-10 13:38:47 +02:00
maxblome
f50b52b462 keycloak_realm: Add missing brute force attributes (#10415)
* Add brute_force_strategy

* Add max_temporary_lockouts

* Add changelog

* Update changelogs/fragments/10415-keycloak-realm-brute-force-attributes.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/keycloak_realm.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/keycloak_realm.py

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-10 13:38:16 +02:00
Alexei Znamensky
25dc09074e pear: command args as list rather than string (#10601)
* pear: command args as list rather than string

* add changelog frag
2025-08-10 13:36:27 +02:00
Alexei Znamensky
1bd7aac07e open_iscsi: command args as list rather than string (#10599)
* open_iscsi: command args as list rather than string

* add changelog frag
2025-08-10 13:36:14 +02:00
Alexei Znamensky
6b7ec5648d riak: command args as list rather than string (#10603)
* riak: command args as list rather than string

* add changelog frag
2025-08-10 13:36:01 +02:00
Alexei Znamensky
a90759d949 portage: command args as list rather than string (#10602)
* portage: command args as list rather than string

* add changelog frag

* fix pr number in chglog frag
2025-08-10 13:35:45 +02:00
Alexei Znamensky
b1bb034b50 solaris_zone: command args as list rather than string (#10604)
* solaris_zone: command args as list rather than string

* add changelog frag
2025-08-10 13:35:32 +02:00
Alexei Znamensky
2dd74b3f3c swupd: command args as list rather than string (#10605)
* swupd: command args as list rather than string

* add changelog frag
2025-08-10 13:35:20 +02:00
Alexei Znamensky
83ce53136c urpmi: command args as list rather than string (#10606)
* urpmi: command args as list rather than string

* add changelog frag
2025-08-10 13:35:03 +02:00
Alexei Znamensky
9fc5d2ec4d xfs_quota: command args as list rather than string (#10609) 2025-08-10 13:34:30 +02:00
Alexei Znamensky
5d3662b23c timezone: command args as list rather than string (#10612)
* timezone: command args as list rather than string

* adjust attr `update_timezone`

* add changelog frag
2025-08-10 13:34:04 +02:00
Felix Fontein
8960a57d53 Add binary_file lookup (#10616)
* Add binary_file lookup.

* Remove sentence on deprecation.
2025-08-10 13:32:35 +02:00
Alexei Znamensky
4e8a6c03dd infinity: improve RV descriptions (#10618) 2025-08-10 13:29:28 +02:00
Alexei Znamensky
a68ba50466 homectl, maven_artifact: removed redundant comments (#10620)
* homectl, maven_artifact: removed redundant comments

* stacki_hosts: one more redundant comment
2025-08-10 13:29:12 +02:00
Abhijeet Kasurde
9155bc2e53 random_string: add docs to use min_* (#10610)
* random_string: add docs to use min_*

* Update docs for min_* usage

Fixes: #10576

Signed-off-by: Abhijeet Kasurde <Akasurde@redhat.com>

* Review requests

Signed-off-by: Abhijeet Kasurde <Akasurde@redhat.com>

---------

Signed-off-by: Abhijeet Kasurde <Akasurde@redhat.com>
2025-08-06 20:44:26 +02:00
Alexei Znamensky
25163ed87a github_repo: deprecate force_defaults=true (#10435)
* github_repo: deprecate force_defaults=true

* add changelog frag
2025-08-05 06:12:48 +02:00
Felix Fontein
88bd44aea7 rocketchat: deprecate default value of is_pre740 (#10490)
* Deprecate default value of is_pre740.

* Use correct markup.

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>

---------

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-08-04 20:32:23 +02:00
Alexei Znamensky
1518b43b85 django module utils: remove deprecated function arg ignore_value_none (#10574)
* django module utils: remove deprecated function arg ignore_value_none

* fix argument order in call from _DjangoRunner to superclass

* add changelog frag
2025-08-04 20:02:10 +02:00
Alexei Znamensky
47ebde3339 logstash_plugin: command args as list rather than string (#10573)
* logstash_plugin: command args as list rather than string

* add changelog frag
2025-08-04 20:02:01 +02:00
desand01
85f6a07b19 Keycloak realm add support for some missing options (#10538)
* First commit

* fix

* changelog

---------

Co-authored-by: Andre Desrosiers <andre.desrosiers@ssss.gouv.qc.ca>
2025-08-04 20:01:50 +02:00
Alexei Znamensky
40bcfd9646 imgadm: command args as list rather than string (#10536)
* imgadm: command args as list rather than string

* add changelog frag

* Update plugins/modules/imgadm.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/imgadm.py

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-04 20:01:36 +02:00
desand01
7ffeaaa16d Keycloak idp well known url support (#10527)
* first commit

* add and fix test

* add example

* fragment and sanity

* sanity

* sanity

* Update plugins/modules/keycloak_identity_provider.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update changelogs/fragments/10527-keycloak-idp-well-known-url-support.yml

---------

Co-authored-by: Andre Desrosiers <andre.desrosiers@ssss.gouv.qc.ca>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-04 20:01:05 +02:00
Alexei Znamensky
5bdd82fbf5 composer: command args as list rather than string (#10525)
* composer: command args as list rather than string

* add changelog frag
2025-08-04 20:00:56 +02:00
Alexei Znamensky
4918ecd4c5 easy_install: command args as list rather than string (#10526)
* easy_install: command args as list rather than string

* add changelog frag
2025-08-04 20:00:46 +02:00
Alexei Znamensky
7e2d91e53d capabilities: command args as list rather than string (#10524)
* capabilities: command args as list rather than string

* add changelog frag
2025-08-04 20:00:39 +02:00
Alexei Znamensky
a96684ef40 bzr: command args as list rather than string (#10523)
* bzr: command args as list rather than string

* add changelog frag
2025-08-04 20:00:30 +02:00
Alexei Znamensky
2a4222c0f6 apk: command args as list rather than string (#10520)
* apk: command args as list rather than string

* add changelog frag

* APK_PATH itself should be a list not a string

* fix mock values in unit tests

* keep package names as list

* add package names as list to cmd line
2025-08-04 20:00:23 +02:00
Youssef Ali
d0a1a617af Addressing multiple jenkins_plugins module issue (#10346)
* Fix version compatibility issue

* Add dependencies installation to specific versions

* Separate Jenkins and updates_url credentials

* Create changelog fragment

* Added a test and some adjustments

* Return to fetch_url

* Add pull link to changelog and modify install latest deps function

* Use updates_url for plugin version if it exists

* Change version number
2025-08-04 20:00:15 +02:00
Dexter
47aec26001 pacemaker_info: new module and enhance cli_action (#10291)
* feat(info): Add pacemaker_info module and enhance cli_action util

This commit adds in the pacemaker_info module which is responsible for
retrieving pacemaker facts. Additionally, the cli_action var has been
refactored for the pacemaker.py util, which is passed through the
runner.

* refactor(version): Bump version_added to 11.2.0

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/pacemaker_info.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* refactor(process): Simplify command output

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-04 20:00:04 +02:00
Klention Mali
e91e2ef6f8 lvm_pv_move_data: new module (#10416)
* Added lvm_pv_move_data module

* Removed trailing whitespace

* Decreased loop devices file size

* Remove test VG if exists

* Force remove test VG if exists

* Renamed test VG and LV names

* Updated assert conditions

* Added .ansible to .gitignore

* Force extending VG

* Wiping LVM metadata from PVs before creating VG

* Clean FS, LV, VG and PVs before run

* Migrated to CmdRunner

* Added more detailed info in case of failure and cosmetic changes

* Remove redundant params from CmdRunner call

* Updates the RETURN documentation block to properly specify the return type
of the 'actions' field:
- Changes return status from 'always' to 'success'
- Adds missing 'elements: str' type specification
2025-08-04 19:59:54 +02:00
Mia-Cross
658af61e17 scaleway: update zone list (#10424)
* changelog fragment

* add new zones

* add new zones to choices for instance resources

* add new zones to doc in inventory plugin

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update changelogs/fragments/10424-scaleway-update-zones.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-04 19:59:45 +02:00
Alexei Znamensky
158f64ca77 bearychat: deprecation (#10514)
* deprecation: bearychat

* add changelog frag

* fix chglog file placement
2025-08-04 19:59:37 +02:00
Alexei Znamensky
6e1821e557 nagios: make services param a list (#10493)
* nagios: make services param a list

* add changelog frag

* nagios: update docs
2025-08-04 19:59:31 +02:00
Alexei Znamensky
e3467385fb cpanm: deprecate mode=compatibility (#10434)
* cpanm: deprecate mode=compatibility

* adjust docs

* add changelog frag
2025-08-04 19:58:59 +02:00
Felix Fontein
710c02ec01 tasks_only callback: add result_format_callback docs fragment (#10422)
Add result_format_callback docs fragment.
2025-08-04 19:58:51 +02:00
Alexei Znamensky
32fbacd9ae sensu_subscription: normalize quotes in return message (#10483)
* sensu_subscription: normalize quotes in return message

* add changelog frag
2025-08-04 19:58:40 +02:00
Felix Fontein
c7e18306fb CI: python-jenkins 1.8.3 fails to import on Python 2.7 (#10570)
python-jenkins 1.8.3 fails to import on Python 2.7.
2025-08-03 13:58:26 +02:00
Felix Fontein
14f706c5dd merge_variables lookup: avoid deprecated Templar.set_temporary_context (#10566)
Avoid deprecated Templar.set_temporary_context.
2025-08-03 12:54:14 +02:00
hakril
bd84f65456 Improve capabilities module by detecting /sbin/getcap error message and stop early with a meaningful error message (#10455)
* modules/capabilities.py: fail & propagate if getcap command error

* Fix comment spacing (pep8)

* Add changelogs fragment for PR 10455

* Update changelogs/fragments/10455-capabilities-improve-error-detection.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: clement rouault <clement.rouault@exatrack.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-08-02 16:50:21 +02:00
Felix Fontein
3de073fb6f json_query: extend list of type aliases for compatibility with ansible-core 2.19 (#10539)
* Extend list of type aliases for json_query.

* Improve tests.

Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>

---------

Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>
2025-08-02 16:42:34 +02:00
Felix Fontein
9a29622584 Disable pipelining for doas and machinectl on ansible-core 2.19+ (#10537)
Disable pipelining for doas and machinectl.
2025-08-02 16:41:58 +02:00
Felix Fontein
abfe1e6180 apk: fix empty/whitespace-only package name check (#10532)
* Fix empty/whitespace-only package name check.

* Adjust test.
2025-08-02 16:41:24 +02:00
Felix Fontein
ac4aca2004 diy callback: add test for on_any_msg (#10550)
Add test for on_any_msg.
2025-08-02 16:33:55 +02:00
Felix Fontein
7298f25fe0 Fix no longer valid constructs in tests (#10543)
Fix no longer valid constructs in tests.
2025-08-01 23:46:46 +02:00
Alexei Znamensky
3b551f92fc arg_spec adjustments: modules [t-z]* (#10513)
* arg_spec adjustments: modules [t-z]*

* add changelog frag
2025-08-01 10:56:00 +02:00
Felix Fontein
d0b0aff5bc wsl connection: import paramiko directly (#10531)
Import paramiko directly.
2025-08-01 10:54:26 +02:00
Alexei Znamensky
3bb7a77b14 arg_spec adjustments: modules [o-s]* (#10512)
* arg_spec adjustments: modules [o-s]*

* add changelog frag
2025-07-31 22:46:32 +02:00
Alexei Znamensky
5601ef4c57 arg_spec adjustments: modules [k-n]* (#10507)
* arg_spec adjustments: modules [k-n]*

* adjust lxca tests

* add changelog frag
2025-07-31 22:45:12 +02:00
Alexei Znamensky
0f7cd5473f arg_spec adjustments: modules [g-j]* (#10505)
* arg_spec adjustments: modules [g-j]*

* add changelog frag
2025-07-31 22:43:41 +02:00
freyja
84b5d38c51 Change description of nopasswd parameter for sudoers to be more clear (#10506)
Update sudoers.py

Made the description of nopasswd more clear
2025-07-30 06:16:20 +02:00
Felix Fontein
6ce9f805a8 CI: Add Python 3.14 unit tests (#10511)
* Add Python 3.14 unit tests.

* Skip test if github cannot be imported.

It currently cannot be imported because nacl isn't compatible with Python 3.14 yet,
and importing github indirectly tries to import nacl, which fails as it uses a
type from typing that got removed in 3.14.

* Skip test if paramiko cannot be imported.
2025-07-29 22:08:28 +02:00
Felix Fontein
69bcb88efe Update Python versions for CI (#10508)
* Update Python versions for CI.

* Disable Python 3.14 temporarily.
2025-07-29 06:54:54 +02:00
David Lundgren
44ca366173 sysrc: refactor (#10417)
* sysrc: refactor

* sysrc: refactor changelog fragment

* sysrc: forgot the os import

* sysrc: update test to edit the correct file

* sysrc: Added copyright info to the test conf file

* sysrc: Added full copyright info to the test conf file

* sysrc: Detect permission denied when using sysrc

* sysrc: Fixed the permission check and 2.7 compatibility

* sysrc: Fix typo of import

* sysrc: Fix err.find check

* sysrc: Add bugfixes changelog fragment

* sysrc: Use `StateModuleHelper`

* sysrc: updated imports

* sysrc: remove re import and set errno.EACCES on the OSError

* sysrc: format code properly

* sysrc: fix Python 2.7 compatibility and set changed manually

* sysrc: add missing name format check

Also use `self.module.fail_json` through out

* sysrc: Removed os import by accident

* sysrc: updated per review, and the way the existing value is retrieved
2025-07-28 19:01:44 +02:00
Alexei Znamensky
15d3ea123d remove common return values from docs (#10485)
* remove common return values from docs

* pacman: add note about version added of RV
2025-07-28 18:46:02 +02:00
Alexei Znamensky
736ce1983d arg_spec adjustments: modules [a-f]* (#10494)
* arg_spec adjustments: modules [a-f]*

* add changelog frag

* Update changelogs/fragments/10494-rfdn-1.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-28 18:44:25 +02:00
Felix Fontein
de0618b843 irc: fix wrap_socket() call when validate_certs=true and use_tls=true (#10491)
Fix wrap_socket() call when validate_certs=true and use_tls=true.
2025-07-28 06:32:23 +02:00
Giorgos Drosos
1f8b5eea4c cronvar: Handle empty value string properly (#10445)
* Fix empty value issue in cronvar

* Update changelog

* Update plugins/modules/cronvar.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update changelogs/fragments/10445-cronvar-reject-empty-values.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update tests/integration/targets/cronvar/tasks/main.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update tests/integration/targets/cronvar/tasks/main.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Accept empty strings on cronvar

* Update plugins/modules/cronvar.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update main.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-28 06:31:51 +02:00
Felix Fontein
a692888478 Normalize changelog configs. 2025-07-27 16:36:35 +02:00
Alexei Znamensky
7b05484d8f doc style adjustments: modules [rtuvx]* (#10466)
* doc style adjustments: modules r*

* doc style adjustments: modules t*

* doc style adjustments: modules u*

* doc style adjustments: modules v*

* doc style adjustments: modules x*

* Update plugins/modules/redis_data.py

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-27 15:59:49 +02:00
Alexei Znamensky
c1bd461173 doc style adjustments: modules s* (#10480)
* doc style adjustments: modules s*

* adjust comment indentation

* remove empty RETURN section in stacki_host

* spectrum_model_attrs: improve formatting of example

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/spotinst_aws_elastigroup.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/swdepot.py

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-27 15:59:26 +02:00
Alexei Znamensky
dc7d791d12 doc style adjustments: modules [yz]* (#10481)
* doc style adjustments: modules y*

* doc style adjustments: modules z*
2025-07-27 15:58:50 +02:00
Giorgos Drosos
3ad57ffa67 Ensure apk handles empty name strings properly (#10442)
* Ensure apk handles empty name strings

* Update changelog

* Update tests/integration/targets/apk/tasks/main.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update changelogs/fragments/10442-apk-fix-empty-names.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Remove redundant conditional

* Remove redundant ignore errors

* Reject apk with update cache for empty package names

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-27 11:52:07 +02:00
Giorgos Drosos
fe59c6d29e listen_ports_facts: Avoid crash when required commands are missing (#10458)
* Fix listen-port-facts crash

* Update changelog

* Update tests/integration/targets/listen_ports_facts/tasks/main.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Fix sanity tests

* Update changelogs/fragments/10458-listen_port_facts-prevent-type-error.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-27 11:51:13 +02:00
Giorgos Drosos
cc13f42be4 Fix cronvar crash when parent dir of cron_file is missing (#10461)
* Fix cronvar crash on non existent directories

* Update changelog

* Fix small variable bug

* Fix trailing whitespace

* Fix CI issues

* Update changelogs/fragments/10461-cronvar-non-existent-dir-crash-fix.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/cronvar.py

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-27 11:50:18 +02:00
Felix Fontein
ee7830667a Fix ansible-core 2.19 deprecations (#10459)
Do not return warnings.
2025-07-27 11:49:27 +02:00
Alexei Znamensky
d288555fd9 doc style adjustments: modules p* (#10463)
* doc style adjustments: modules p*

* Update plugins/modules/pacemaker_resource.py

* Update plugins/modules/pagerduty_alert.py

* Update plugins/modules/pear.py

* Update plugins/modules/portage.py

* reformat

* adjustment from review

* Update plugins/modules/pkg5_publisher.py

Co-authored-by: Peter Oliver <github.com@mavit.org.uk>

---------

Co-authored-by: Peter Oliver <github.com@mavit.org.uk>
2025-07-27 11:48:50 +02:00
Felix Fontein
b458ee85ce CI: Bump Alpine 3.21 to 3.22, Fedora 41 to 42, and FreeBSD 14.2 to 14.3 (#10462)
* Bump Alpine 3.21 to 3.22, Fedora 41 to 42, RHEL 9.5 to 9.6, and FreeBSD 14.2 to 14.3.

Add old versions to stable-2.19 if not present yet.

* Add some expected skips.

* Add more restrictions.

* Another try for Android tests.

* Another try.

* Another try.
2025-07-26 14:08:20 +02:00
Alexei Znamensky
6d67546902 doc style adjustments: modules [no]* (#10443)
* doc style adjustments: modules n*

* doc style adjustments: modules o*

* Apply suggestions from code review

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-25 08:52:01 +02:00
Felix Fontein
f1f7d9b038 CI: Disable zpool tests on Alpine (#10449)
Disable zpool tests on Alpine.
2025-07-24 22:29:16 +02:00
Felix Fontein
01f3248a12 CI: Replace FreeBSD 13.3 with 13.5 (#10446)
Replace FreeBSD 13.3 with 13.5.
2025-07-24 17:43:21 +02:00
Alexei Znamensky
69d479f06c doc style adjustments: modules [lm]* (#10433)
* doc style adjustments: modules l*

* doc style adjustments: modules m*

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/logstash_plugin.py

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-21 22:09:09 +02:00
Felix Fontein
bc4d06ef34 Fix dnf_versionlock examples (#10428)
Fix dnf_versionlock examples.
2025-07-18 23:03:10 +02:00
Alexei Znamensky
14f13daa99 doc style adjustments: modules [jk]* (#10420)
* doc style adjustments: modules j*

* doc style adjustments: modules k*

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/keycloak_realm_key.py

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-18 01:22:59 +02:00
Felix Fontein
77cd018427 Next expected release will be 11.2.0. 2025-07-14 15:38:19 +02:00
Alexei Znamensky
a36ad54b53 doc style adjustments: modules i* (#10409) 2025-07-14 15:14:20 +02:00
Dexter
283d947f17 pacemaker_cluster: enhancements and add unit tests (#10227)
* feat(initial): Add unit tests and rewrite pacemaker_cluster

This commit introduces unit tests and pacemaker_cluster module rewrite
to use the pacemaker module utils.

* feat(cleanup): Various fixes and add resource state

This commit migrates the pacemaker_cluster's cleanup state to the
pacemaker_resource module. Additionally, the unit tests for
pacemaker_cluster have been corrected to proper mock run command order.

* doc(botmeta): Add author to pacemaker_cluster

* style(whitespace): Cleanup test files

* refactor(cleanup): Remove unused state value

* bug(fix): Parse apply_all as separate option

* refactor(review): Apply code review suggestions

This commit refactors breaking changes in pacemaker_cluster module into
deprecated features. The following will be scheduled for deprecation:
`state: cleanup` and `state: None`.

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

* refactor(review): Additional review suggestions

* refactor(deprecations): Remove all deprecation changes

* refactor(review): Enhance rename changelog entry and fix empty string logic

* refactor(cleanup): Remove from pacemaker_resource

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

* refactor(review): Add changelog and revert required name

* revert(default): Use default state=present

* Update changelogs/fragments/10227-pacemaker-cluster-and-resource-enhancement.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update changelog fragment.

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-14 07:48:36 +02:00
Felix Fontein
4801b0fc00 manageiq_provider: fix docs markup (#10399)
* Fix docs markup.

* Add one more.

Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>

* Update plugins/modules/manageiq_provider.py

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>

* More fixes.

---------

Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-07-14 07:23:12 +02:00
Alexei Znamensky
5e2ffb845f doc style adjustments: modules [cd]* (#10397)
* doc style adjustments: modules c*

* doc style adjustments: modules d*

* Update plugins/modules/consul_agent_check.py

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-13 21:03:09 +00:00
Felix Fontein
3787808e72 iocage inventory guide: adjust filenames, fix typo (#10396)
* Rename iocage inventory guide files.

* Fix typo.
2025-07-13 22:27:31 +02:00
Alexei Znamensky
717ef51137 doc style adjustments: modules [efgh]* (#10398)
* doc style adjustments: modules e*

* doc style adjustments: modules f*

* doc style adjustments: modules g*

* doc style adjustments: modules h*

* Update plugins/modules/easy_install.py

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-13 17:14:40 +02:00
Vladimir Botka
563b29e12a Added docs Inventory Guide. (#10239)
* Added docs Inventory Guide.

* Errata docs Inventory Guide.

* Fix docs Inventory Guide error: use ASCII quotes.

* Fix docs Inventory Guide various lint errors.

* Added docs Inventory Guide BOTMETA entries.

* Fix docs Inventory Guide lint errors: trailing whitespace

* Fix docs Inventory Guide lint errors: force yaml pygment

* Fix docs Inventory Guide lint errors: No way to force yaml pygment in code-block

* Update docs/docsite/rst/inventory_guide_iocage.rst

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update docs/docsite/rst/inventory_guide_iocage_aliases.rst

Thank you for the explanation!

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update docs/docsite/rst/inventory_guide_iocage_aliases.rst

Co-authored-by: Felix Fontein <felix@fontein.de>

* Updated docs Inventory Guide.

* Problematic pygments changed to 'console'.

* Update docs/docsite/rst/inventory_guide_iocage_hooks.rst
  Update docs/docsite/rst/inventory_guide_iocage_properties.rst
  Update docs/docsite/rst/inventory_guide_iocage_hooks.rst

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>

* Put dhclient-exit-hooks into the sh code-block.

* Fix the code-block.

* Update docs/docsite/rst/inventory_guide_iocage.rst
  Update docs/docsite/rst/inventory_guide_iocage_aliases.rst
  Update docs/docsite/rst/inventory_guide_iocage_basics.rst

Co-authored-by: Felix Fontein <felix@fontein.de>

* Remove tabs.

* Update docs/docsite/rst/inventory_guide_iocage_basics.rst

Co-authored-by: Felix Fontein <felix@fontein.de>

* Indent the note block.

* Update docs/docsite/rst/inventory_guide_iocage_hooks.rst
  Update docs/docsite/rst/inventory_guide_iocage_dhcp.rst
  Update docs/docsite/rst/inventory_guide_iocage_hooks.rst

Co-authored-by: Felix Fontein <felix@fontein.de>

* Fix ansval.

* Add guide_iocage.rst and inventory_guide_iocage*.rst

* Fix 'disallowed language sh found'.

* Remove note block.

* Remove include which triggers a bug in rstcheck.

* Update docs/docsite/extra-docs.yml
  Update docs/docsite/rst/iocage_inventory_guide_basics.rst
  Update docs/docsite/rst/iocage_inventory_guide_dhcp.rst
  Update docs/docsite/rst/iocage_inventory_guide_hooks.rst
  Update docs/docsite/rst/iocage_inventory_guide_properties.rst
  Update docs/docsite/rst/iocage_inventory_guide_tags.rst
  Update docs/docsite/rst/iocage_inventory_guide_hooks.rst
  Update docs/docsite/rst/iocage_inventory_guide_properties.rst

Co-authored-by: Felix Fontein <felix@fontein.de>

* Put man iocage quotation into the text code block.

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-07-12 20:43:07 +02:00
Abhijeet Kasurde
baf1cdec09 Enable hg integration test (#10385)
Fixes: #10044

Signed-off-by: Abhijeet Kasurde <Akasurde@redhat.com>
2025-07-12 12:34:18 +02:00
Aditya Putta
731f0be3f4 Configure LUKS encrypted volume using crypttab (#10333) 2025-07-12 22:28:57 +12:00
Aditya Putta
20e9ef877f community.general.easy_install: use of the virtualenv_command parameter (#10380)
* community.general.easy_install: use of the virtualenv_command parameter

* Apply suggestions from code review

---------

Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>
2025-07-12 12:05:42 +02:00
Felix Fontein
1a7aafc037 lvg examples: use YAML lists (#10363)
Use YAML lists.
2025-07-11 06:07:21 +01:00
Felix Fontein
a0200d1130 Disable lmdb_kv integration tests (#10374)
Disable lmdb_kv integration tests.
2025-07-10 21:17:06 +02:00
Bruno Lavoie
e5b37c3ffd github_release - support multiple type of tokens (#10339)
* Support multiple types of tokens

* Add missing spaces around operator.

* Add changelog fragments.

* fix logic, missing NOT

* Update changelogs/fragments/10339-github_app_access_token.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-08 22:20:04 +02:00
Abhijeet Kasurde
096fa388ac logstash: Remove reference to Python 2 library (#10345)
* logstash: Remove reference to Python 2 library

Signed-off-by: Abhijeet Kasurde <Akasurde@redhat.com>

* Review requests

Signed-off-by: Abhijeet Kasurde <Akasurde@redhat.com>

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Signed-off-by: Abhijeet Kasurde <Akasurde@redhat.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-08 22:19:37 +02:00
Felix Fontein
f2286701c8 Add tasks_only callback (#10347)
* Add tasks_only callback.

* Improve tests.

* Fix option name.

* Add missing s.

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>

* Add ignore.txt entry.

---------

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2025-07-08 07:18:19 +02:00
Felix Fontein
49975b383a Remove no longer needed ignore-2.15.txt. 2025-07-08 06:42:22 +02:00
Felix Fontein
16d6e4a8e5 dependent lookup: avoid deprecated ansible-core 2.19 functionality (#10359)
* Avoid deprecated ansible-core 2.19 functionality.

* Adjust unit tests.
2025-07-08 06:40:54 +02:00
Stéphane Graber
4195cbb364 incus_connection: Improve error handling (#10349)
Related to #10344

This tweaks the error handling logic to work with more versions of Incus
as well as catching some of the project and instance access errors.

The full context (instance name, project name and remote name) is now
included so that the user can easily diagnose access problems.

Signed-off-by: Stéphane Graber <stgraber@stgraber.org>
2025-07-07 20:52:55 +02:00
Alexei Znamensky
7a4448d45c doc style adjustments: modules [ab]* (#10350)
* doc style adjustments: modules [ab]*

* Update plugins/modules/btrfs_subvolume.py

* Update plugins/modules/aerospike_migrations.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/aix_filesystem.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/bigpanda.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* aix_filesystems: roll back wording for `filesystem` description

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-07 20:50:35 +02:00
Aditya Putta
5ef1cad64f Using add_keys_to_agent in ssh_config module (#10337)
* Using add_keys_to_agent in ssh_config module

* removed white space

* Apply suggestion

---------

Co-authored-by: Abhijeet Kasurde <akasurde@redhat.com>
2025-07-06 08:45:31 +12:00
Alexei Znamensky
7959d971a4 nmcli: improvements (#10323)
* better handling of parameter validation

* execute_command is always called with list arg

* minor improvements

* add changelog frag
2025-07-05 14:52:15 +02:00
Aditya Putta
2ec3d02215 jenkins_build: docs example for trigger with custom polling interval (#10335) 2025-07-05 18:03:56 +12:00
Aditya Putta
dd13592034 lvg: add docs example for preserving existing PVs in a volume group using remove_extra_pvs: false (#10336) 2025-07-05 17:17:00 +12:00
Aditya Putta
79509a533d flatpak: add docs example for install using custom executable path (#10334) 2025-07-05 17:13:44 +12:00
Alexei Znamensky
66139679e1 catapult: deprecation (#10329)
* catapult: deprecation

* add changelog frag

* Update changelogs/fragments/10329-catapult-deprecation.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update meta/runtime.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/catapult.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/catapult.py

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-04 06:26:16 +02:00
Alexei Znamensky
682a89cdf5 remove unnecessary brackets in conditions (#10328)
* remove unnecessary brackets in conditions

* add changelog frag
2025-07-03 06:46:50 +02:00
Alexei Znamensky
5a5b2d2eed remove unnecessary checks for unsupported python versions (#10327) 2025-07-02 10:23:58 +12:00
Felix Fontein
4323058809 Adjust README. 2025-07-01 22:36:21 +02:00
Alexei Znamensky
580ac1e30d fix style in plugins (#10302)
Co-authored-by: Felix Fontein <felix@fontein.de>
2025-07-02 01:15:50 +12:00
Alexei Znamensky
329c2222fc fix style in plugins (#10303) 2025-07-02 01:15:01 +12:00
Felix Fontein
dd3c253b78 CI: Add stable-2.19 (#10319)
* Add ignore-2.20.txt.

* Add stable-2.19 to CI.
2025-07-01 07:39:13 +02:00
Felix Fontein
7e66fb052e CI: Add yamllint for YAML files, plugin/module docs, and YAML in extra docs (#10279)
* Add yamllint to CI.

* Fix more YAML booleans.
2025-06-30 20:46:56 +02:00
Felix Fontein
41855418bb CI: add checks for code block types in extra docs (#10280)
* Add checks for code block types in extra docs.

* Add 'ini' and 'text' to allowlist.
2025-06-30 20:16:22 +02:00
Alexei Znamensky
cc2e067907 htpasswd: doc adjustment (#10313) 2025-07-01 00:37:01 +12:00
Felix Fontein
3b5a9779b4 Add comment that transform_recursively should no longer be needed. 2025-06-29 09:50:46 +02:00
Alexei Znamensky
5462b1cff8 xfconf: small refactor (#10311)
* xfconf: small refactor

* add changelog frag

* Update changelogs/fragments/10311-xfconf-refactor.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-06-28 13:04:28 +02:00
alice seaborn
7d06be1c20 fix typo in ipa_dnsrecord module examples (#10304)
[FIX] Typo in ipa_dnsrecord example

Simple comma instead of a period, easy mistake.
2025-06-26 22:03:36 +02:00
Felix Fontein
af8c586e29 Docs: use :anscollection: (#10297)
Use :anscollection:.
2025-06-25 21:41:50 +02:00
Wade Simmons
1ed0f329bc slack: support slack-gov.com (#10270)
* slack: support slack-gov.com

Allow the slack module to work with GovSlack, hosted at https://slack-gov.com/

This re-uses the existing `domain` option so that users can set it to
`slack-gov.com` to use GovSlack. To maintain backwards compatibility,
any setting of `domain` for WebAPI tokens that is not `slack.com` or
`slack-gov.com` is ignored.
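The backwards-compatibility rule described above can be sketched roughly as follows. This is a hypothetical helper (`resolve_api_domain` is not the module's actual code), assuming only the behavior the commit message states: for WebAPI tokens, a `domain` other than `slack.com` or `slack-gov.com` is ignored.

```python
def resolve_api_domain(domain):
    """Return the Slack API domain to use for WebAPI tokens.

    Hypothetical sketch of the described rule: only 'slack.com' and
    'slack-gov.com' are honored; any other value is ignored so that
    existing playbooks keep working.
    """
    if domain in ("slack.com", "slack-gov.com"):
        return domain
    # Unrecognized values fall back to the default for compatibility.
    return "slack.com"


print(resolve_api_domain("slack-gov.com"))   # slack-gov.com
print(resolve_api_domain("hooks.example.com"))  # slack.com
```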

* fixup

* cleanup

* fix pep8

* clean up docs and better function name

* document default value

* try to fix yaml, not sure what is wrong

* Update plugins/modules/slack.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/slack.py

Co-authored-by: Felix Fontein <felix@fontein.de>

* Update plugins/modules/slack.py

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-06-25 08:08:12 +02:00
Alexei Znamensky
dd53a2cee0 cloudflare_dns: some refactoring (#10269)
* cloudflare_dns: remove extraneous validation

* further improvements

* revert the first validation removed

* simplify validation for types SRC and CAA

* add changelog frag
2025-06-25 08:07:51 +02:00
YoussefKhalidAli
52cd104962 jenkins_credentials: new module to manage Jenkins credentials (#10170)
* Added Jenkins credentials module to manage Jenkins credentials

* Added Jenkins credentials module to manage Jenkins credentials

* Added import error detection, adjusted indentation, and general enhancements.

* Added py3 requirement and set files value to avoid errors

* Added username to BOTMETA. Switched to format() instead of f strings to support py 2.7, improved delete function, and added function to read private key

* Remove redundant message

Co-authored-by: Felix Fontein <felix@fontein.de>

* Replaced requests with ansible.module_utils.urls, merged check domain and credential functions, and made minor adjustments to documentation

* Adjusted for py 2.7 compatibility

* Replaced command with state.

* Added managing credentials within a folder and made adjustments to documentation

* Added unit and integration tests, added token managament, and adjusted documentation.

* Added unit and integration tests, added token management, and adjusted documentation.(fix)

* Fix BOTMETA.yml

* Removed files and generate them at runtime.

* moved id and token checks to required_if

* Documentation changes, different test setup, and switched to Ansible testing tools

* Fixed typos

* Correct indentation.

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-06-24 06:27:24 +02:00
Alexei Znamensky
e37cd1a015 fix YAML docs in multiple plugins (#10286)
* fix YAML docs in multiple plugins

* pfexec: fix short description

* adjust callback plugins

* fix wsl connection

* fix filter plugins

* fix inventory plugins

* minor adjustments in diy, print_task, xen_orchestra
2025-06-24 06:23:46 +02:00
Alexei Znamensky
3ab7a898c6 replace concatenations with f-string in plugins (#10285)
* replace concatenations with f-string in plugins

* add changelog frag
2025-06-23 21:10:19 +02:00
Alexei Znamensky
d4f2b2fb55 sl_vm: update docs about requirements (#10282)
* sl_vm: update docs about requirements

* Update plugins/modules/sl_vm.py
2025-06-19 21:28:03 +02:00
Titus Sanchez
b7f9f24ffe cloudflare_dns: Add PTR record support (#10267)
* cloudflare_dns: Add PTR record support

* Add changelog fragment

* Apply suggestions from code review

Co-authored-by: Felix Fontein <felix@fontein.de>

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-06-19 07:26:13 +02:00
Felix Fontein
40fb0f0c75 Inventory plugins: remove deprecated disable_lookups parameter (which was set to its default anyway) (#10271)
* Remove default value for keyword argument that is deprecated since ansible-core 2.19.

* Add changelog fragment.
2025-06-18 21:38:59 +02:00
Felix Fontein
5b14129c8f sysrc jail tests: FreeBSD 14.1 stopped working (#10272)
FreeBSD 14.1 stopped working.
2025-06-18 21:38:44 +02:00
divinity666
f44ca23d7a keycloak: add support for client_credentials authentication (#10231)
* add client_credentials authentication for keycloak tasks incl. test case

* support client credentials in all keycloak modules

* Add changelog fragment

* fix typos in required list

* Update changelogs/fragments/10231-keycloak-add-client-credentials-authentication.yml

Co-authored-by: Felix Fontein <felix@fontein.de>

* revert keycloak url in test environment

---------

Co-authored-by: Felix Fontein <felix@fontein.de>
2025-06-18 07:40:46 +02:00
Alexei Znamensky
74ed0fc438 import mocks from community.internal_test_tools (#10264) 2025-06-17 19:32:41 +12:00
Felix Fontein
38ab1fbb88 Extra docs: normalize code block language (#10261)
Extra docs: normalize code block language.
2025-06-17 11:04:24 +12:00
Felix Fontein
49d84e7b97 Update CI schedule. 2025-06-16 20:11:52 +02:00
Alexei Znamensky
7faf59cf2e typetalk: deprecation (#9499)
* typetalk: deprecation

* add changelog frag
2025-06-16 20:06:03 +02:00
Felix Fontein
760e7393c9 The next expected release will be 11.1.0. 2025-06-16 19:58:41 +02:00
1566 changed files with 98492 additions and 78760 deletions


@@ -70,6 +70,19 @@ stages:
             - test: 2
             - test: 3
             - test: 4
+  - stage: Sanity_2_21
+    displayName: Sanity 2.21
+    dependsOn: []
+    jobs:
+      - template: templates/matrix.yml
+        parameters:
+          nameFormat: Test {0}
+          testFormat: 2.21/sanity/{0}
+          targets:
+            - test: 1
+            - test: 2
+            - test: 3
+            - test: 4
   - stage: Sanity_2_20
     displayName: Sanity 2.20
     dependsOn: []
@@ -96,19 +109,6 @@ stages:
             - test: 2
             - test: 3
             - test: 4
-  - stage: Sanity_2_18
-    displayName: Sanity 2.18
-    dependsOn: []
-    jobs:
-      - template: templates/matrix.yml
-        parameters:
-          nameFormat: Test {0}
-          testFormat: 2.18/sanity/{0}
-          targets:
-            - test: 1
-            - test: 2
-            - test: 3
-            - test: 4
 ### Units
   - stage: Units_devel
     displayName: Units devel
@@ -125,6 +125,19 @@ stages:
             - test: '3.12'
             - test: '3.13'
             - test: '3.14'
+            - test: '3.15'
+  - stage: Units_2_21
+    displayName: Units 2.21
+    dependsOn: []
+    jobs:
+      - template: templates/matrix.yml
+        parameters:
+          nameFormat: Python {0}
+          testFormat: 2.21/units/{0}/1
+          targets:
+            - test: 3.9
+            - test: "3.12"
+            - test: "3.14"
   - stage: Units_2_20
     displayName: Units 2.20
     dependsOn: []
@@ -149,18 +162,6 @@ stages:
             - test: 3.8
             - test: "3.11"
             - test: "3.13"
-  - stage: Units_2_18
-    displayName: Units 2.18
-    dependsOn: []
-    jobs:
-      - template: templates/matrix.yml
-        parameters:
-          nameFormat: Python {0}
-          testFormat: 2.18/units/{0}/1
-          targets:
-            - test: 3.8
-            - test: "3.11"
-            - test: "3.13"
 
 ## Remote
   - stage: Remote_devel_extra_vms
@@ -173,8 +174,8 @@ stages:
           targets:
             - name: Alpine 3.23
               test: alpine/3.23
-            # - name: Fedora 43
-            #   test: fedora/43
+            # - name: Fedora 44
+            #   test: fedora/44
             - name: Ubuntu 22.04
               test: ubuntu/22.04
             - name: Ubuntu 24.04
@@ -189,8 +190,8 @@ stages:
         parameters:
          testFormat: devel/{0}
           targets:
-            - name: macOS 15.3
-              test: macos/15.3
+            - name: macOS 26.3
+              test: macos/26.3
             - name: RHEL 10.1
               test: rhel/10.1
             - name: RHEL 9.7
@@ -198,8 +199,27 @@ stages:
             # TODO: enable this ASAP!
             # - name: FreeBSD 15.0
             #   test: freebsd/15.0
-            - name: FreeBSD 14.3
-              test: freebsd/14.3
+            # TODO: enable this ASAP!
+            # - name: FreeBSD 14.4
+            #   test: freebsd/14.4
           groups:
             - 1
             - 2
             - 3
+  - stage: Remote_2_21
+    displayName: Remote 2.21
+    dependsOn: []
+    jobs:
+      - template: templates/matrix.yml
+        parameters:
+          testFormat: 2.21/{0}
+          targets:
+            # - name: macOS 26.3
+            #   test: macos/26.3
+            - name: RHEL 10.1
+              test: rhel/10.1
+            # - name: RHEL 9.7
+            #   test: rhel/9.7
+          groups:
+            - 1
+            - 2
@@ -212,6 +232,8 @@ stages:
         parameters:
           testFormat: 2.20/{0}
           targets:
+            - name: macOS 15.3
+              test: macos/15.3
             - name: RHEL 10.1
               test: rhel/10.1
             - name: FreeBSD 14.3
@@ -236,22 +258,6 @@ stages:
             - 1
             - 2
             - 3
-  - stage: Remote_2_18
-    displayName: Remote 2.18
-    dependsOn: []
-    jobs:
-      - template: templates/matrix.yml
-        parameters:
-          testFormat: 2.18/{0}
-          targets:
-            - name: macOS 14.3
-              test: macos/14.3
-            - name: FreeBSD 14.1
-              test: freebsd/14.1
-          groups:
-            - 1
-            - 2
-            - 3
 
 ### Docker
   - stage: Docker_devel
@@ -262,8 +268,8 @@ stages:
         parameters:
           testFormat: devel/linux/{0}
           targets:
-            - name: Fedora 43
-              test: fedora43
+            - name: Fedora 44
+              test: fedora44
             - name: Alpine 3.23
               test: alpine323
             - name: Ubuntu 22.04
@@ -274,6 +280,26 @@ stages:
             - 1
             - 2
             - 3
+  - stage: Docker_2_21
+    displayName: Docker 2.21
+    dependsOn: []
+    jobs:
+      - template: templates/matrix.yml
+        parameters:
+          testFormat: 2.21/linux/{0}
+          targets:
+            - name: Fedora 43
+              test: fedora43
+            # - name: Alpine 3.23
+            #   test: alpine323
+            # - name: Ubuntu 22.04
+            #   test: ubuntu2204
+            - name: Ubuntu 24.04
+              test: ubuntu2404
+          groups:
+            - 1
+            - 2
+            - 3
   - stage: Docker_2_20
     displayName: Docker 2.20
     dependsOn: []
@@ -306,24 +332,6 @@ stages:
             - 1
             - 2
             - 3
-  - stage: Docker_2_18
-    displayName: Docker 2.18
-    dependsOn: []
-    jobs:
-      - template: templates/matrix.yml
-        parameters:
-          testFormat: 2.18/linux/{0}
-          targets:
-            - name: Fedora 40
-              test: fedora40
-            - name: Alpine 3.20
-              test: alpine320
-            - name: Ubuntu 24.04
-              test: ubuntu2404
-          groups:
-            - 1
-            - 2
-            - 3
 
 ### Community Docker
   - stage: Docker_community_devel
@@ -359,6 +367,18 @@ stages:
 #          testFormat: devel/generic/{0}/1
 #          targets:
 #            - test: '3.9'
-#            - test: '3.13'
+#            - test: '3.15'
+#  - stage: Generic_2_21
+#    displayName: Generic 2.21
+#    dependsOn: []
+#    jobs:
+#      - template: templates/matrix.yml
+#        parameters:
+#          nameFormat: Python {0}
+#          testFormat: 2.21/generic/{0}/1
+#          targets:
+#            - test: '3.9'
+#            - test: '3.12'
+#            - test: '3.14'
 #  - stage: Generic_2_20
@@ -382,44 +402,33 @@ stages:
 #          testFormat: 2.19/generic/{0}/1
 #          targets:
 #            - test: '3.9'
 #            - test: '3.13'
-#  - stage: Generic_2_18
-#    displayName: Generic 2.18
-#    dependsOn: []
-#    jobs:
-#      - template: templates/matrix.yml
-#        parameters:
-#          nameFormat: Python {0}
-#          testFormat: 2.18/generic/{0}/1
-#          targets:
-#            - test: '3.8'
-#            - test: '3.13'
 
   - stage: Summary
     condition: succeededOrFailed()
     dependsOn:
       - Sanity_devel
+      - Sanity_2_21
       - Sanity_2_20
       - Sanity_2_19
-      - Sanity_2_18
       - Units_devel
+      - Units_2_21
       - Units_2_20
       - Units_2_19
-      - Units_2_18
       - Remote_devel_extra_vms
       - Remote_devel
+      - Remote_2_21
       - Remote_2_20
       - Remote_2_19
-      - Remote_2_18
       - Docker_devel
+      - Docker_2_21
       - Docker_2_20
       - Docker_2_19
-      - Docker_2_18
       - Docker_community_devel
 # Right now all generic tests are disabled. Uncomment when at least one of them is re-enabled.
 #      - Generic_devel
+#      - Generic_2_21
 #      - Generic_2_20
 #      - Generic_2_19
-#      - Generic_2_18
     jobs:
       - template: templates/coverage.yml


@@ -11,8 +11,7 @@ Keep in mind that Azure Pipelines does not enforce unique job display names (onl
It is up to pipeline authors to avoid name collisions when deviating from the recommended format.
"""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from __future__ import annotations
import os
import re
@@ -24,12 +23,12 @@ def main():
"""Main program entry point."""
source_directory = sys.argv[1]
if '/ansible_collections/' in os.getcwd():
if "/ansible_collections/" in os.getcwd():
output_path = "tests/output"
else:
output_path = "test/results"
destination_directory = os.path.join(output_path, 'coverage')
destination_directory = os.path.join(output_path, "coverage")
if not os.path.exists(destination_directory):
os.makedirs(destination_directory)
@@ -38,27 +37,27 @@ def main():
count = 0
for name in os.listdir(source_directory):
match = re.search('^Coverage (?P<attempt>[0-9]+) (?P<label>.+)$', name)
label = match.group('label')
attempt = int(match.group('attempt'))
match = re.search("^Coverage (?P<attempt>[0-9]+) (?P<label>.+)$", name)
label = match.group("label")
attempt = int(match.group("attempt"))
jobs[label] = max(attempt, jobs.get(label, 0))
for label, attempt in jobs.items():
name = 'Coverage {attempt} {label}'.format(label=label, attempt=attempt)
name = f"Coverage {attempt} {label}"
source = os.path.join(source_directory, name)
source_files = os.listdir(source)
for source_file in source_files:
source_path = os.path.join(source, source_file)
destination_path = os.path.join(destination_directory, source_file + '.' + label)
print('"%s" -> "%s"' % (source_path, destination_path))
destination_path = os.path.join(destination_directory, source_file + "." + label)
print(f'"{source_path}" -> "{destination_path}"')
shutil.copyfile(source_path, destination_path)
count += 1
print('Coverage file count: %d' % count)
print('##vso[task.setVariable variable=coverageFileCount]%d' % count)
print('##vso[task.setVariable variable=outputPath]%s' % output_path)
print(f"Coverage file count: {count}")
print(f"##vso[task.setVariable variable=coverageFileCount]{count}")
print(f"##vso[task.setVariable variable=outputPath]{output_path}")
if __name__ == '__main__':
if __name__ == "__main__":
main()
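The renaming logic in the hunk above keeps only the highest-numbered attempt per coverage label. A minimal standalone sketch of that selection step (the helper name `max_attempts` is hypothetical, not part of the script):

```python
import re

def max_attempts(names):
    """Return the highest attempt number seen for each coverage label,
    mirroring the 'Coverage <attempt> <label>' parsing in the diff above."""
    jobs = {}
    for name in names:
        match = re.search(r"^Coverage (?P<attempt>[0-9]+) (?P<label>.+)$", name)
        if not match:
            continue  # skip directories that do not follow the naming scheme
        label = match.group("label")
        attempt = int(match.group("attempt"))
        jobs[label] = max(attempt, jobs.get(label, 0))
    return jobs

print(max_attempts(["Coverage 1 linux", "Coverage 2 linux", "Coverage 1 macos"]))
```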

View File

@@ -15,7 +15,6 @@ import pathlib
import shutil
import subprocess
import tempfile
import typing as t
import urllib.request
@@ -23,7 +22,7 @@ import urllib.request
class CoverageFile:
name: str
path: pathlib.Path
flags: t.List[str]
flags: list[str]
@dataclasses.dataclass(frozen=True)
@@ -34,8 +33,8 @@ class Args:
def parse_args() -> Args:
parser = argparse.ArgumentParser()
parser.add_argument('-n', '--dry-run', action='store_true')
parser.add_argument('path', type=pathlib.Path)
parser.add_argument("-n", "--dry-run", action="store_true")
parser.add_argument("path", type=pathlib.Path)
args = parser.parse_args()
@@ -46,32 +45,36 @@ def parse_args() -> Args:
return Args(**kwargs)
def process_files(directory: pathlib.Path) -> t.Tuple[CoverageFile, ...]:
def process_files(directory: pathlib.Path) -> tuple[CoverageFile, ...]:
processed = []
for file in directory.joinpath('reports').glob('coverage*.xml'):
name = file.stem.replace('coverage=', '')
for file in directory.joinpath("reports").glob("coverage*.xml"):
name = file.stem.replace("coverage=", "")
# Get flags from name
flags = name.replace('-powershell', '').split('=') # Drop '-powershell' suffix
flags = [flag if not flag.startswith('stub') else flag.split('-')[0] for flag in flags] # Remove "-01" from stub files
flags = name.replace("-powershell", "").split("=") # Drop '-powershell' suffix
flags = [
flag if not flag.startswith("stub") else flag.split("-")[0] for flag in flags
] # Remove "-01" from stub files
processed.append(CoverageFile(name, file, flags))
return tuple(processed)
def upload_files(codecov_bin: pathlib.Path, files: t.Tuple[CoverageFile, ...], dry_run: bool = False) -> None:
def upload_files(codecov_bin: pathlib.Path, files: tuple[CoverageFile, ...], dry_run: bool = False) -> None:
for file in files:
cmd = [
str(codecov_bin),
'--name', file.name,
'--file', str(file.path),
"--name",
file.name,
"--file",
str(file.path),
]
for flag in file.flags:
cmd.extend(['--flags', flag])
cmd.extend(["--flags", flag])
if dry_run:
print(f'DRY-RUN: Would run command: {cmd}')
print(f"DRY-RUN: Would run command: {cmd}")
continue
subprocess.run(cmd, check=True)
@@ -79,11 +82,11 @@ def upload_files(codecov_bin: pathlib.Path, files: t.Tuple[CoverageFile, ...], d
def download_file(url: str, dest: pathlib.Path, flags: int, dry_run: bool = False) -> None:
if dry_run:
print(f'DRY-RUN: Would download {url} to {dest} and set mode to {flags:o}')
print(f"DRY-RUN: Would download {url} to {dest} and set mode to {flags:o}")
return
with urllib.request.urlopen(url) as resp:
with dest.open('w+b') as f:
with dest.open("w+b") as f:
# Read data in chunks rather than all at once
shutil.copyfileobj(resp, f, 64 * 1024)
@@ -92,14 +95,14 @@ def download_file(url: str, dest: pathlib.Path, flags: int, dry_run: bool = Fals
def main():
args = parse_args()
url = 'https://ansible-ci-files.s3.amazonaws.com/codecov/linux/codecov'
with tempfile.TemporaryDirectory(prefix='codecov-') as tmpdir:
codecov_bin = pathlib.Path(tmpdir) / 'codecov'
url = "https://ansible-ci-files.s3.amazonaws.com/codecov/linux/codecov"
with tempfile.TemporaryDirectory(prefix="codecov-") as tmpdir:
codecov_bin = pathlib.Path(tmpdir) / "codecov"
download_file(url, codecov_bin, 0o755, args.dry_run)
files = process_files(args.path)
upload_files(codecov_bin, files, args.dry_run)
if __name__ == '__main__':
if __name__ == "__main__":
main()
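The `process_files()` change above reformats the flag-derivation list comprehension without altering behavior: flags come from the file stem, a `-powershell` suffix is dropped, and `stub-NN` entries collapse to `stub`. A small sketch of that parsing under those assumptions (`flags_from_stem` is a hypothetical helper, not a function in the script):

```python
def flags_from_stem(stem):
    """Derive codecov flags from a coverage report file stem,
    following the parsing shown in process_files() above."""
    name = stem.replace("coverage=", "")
    flags = name.replace("-powershell", "").split("=")  # drop '-powershell' suffix
    # Collapse "stub-01"-style entries to just "stub"
    return [flag if not flag.startswith("stub") else flag.split("-")[0] for flag in flags]

print(flags_from_stem("coverage=stub-01=units"))
```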

View File

@@ -5,8 +5,7 @@
"""Prepends a relative timestamp to each input line from stdin and writes it to stdout."""
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
from __future__ import annotations
import sys
import time
@@ -16,14 +15,14 @@ def main():
"""Main program entry point."""
start = time.time()
sys.stdin.reconfigure(errors='surrogateescape')
sys.stdout.reconfigure(errors='surrogateescape')
sys.stdin.reconfigure(errors="surrogateescape")
sys.stdout.reconfigure(errors="surrogateescape")
for line in sys.stdin:
seconds = time.time() - start
sys.stdout.write('%02d:%02d %s' % (seconds // 60, seconds % 60, line))
seconds = int(time.time() - start)
sys.stdout.write(f"{seconds // 60:02}:{seconds % 60:02} {line}")
sys.stdout.flush()
if __name__ == '__main__':
if __name__ == "__main__":
main()
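The hunk above replaces the `%`-style `MM:SS` formatting with an f-string and truncates the elapsed time to whole seconds before splitting it. A sketch of just the formatting step (the `stamp` helper is hypothetical):

```python
def stamp(elapsed):
    """Format elapsed seconds as zero-padded MM:SS, as the rewritten f-string does."""
    seconds = int(elapsed)  # truncate fractional seconds, per the diff
    return f"{seconds // 60:02}:{seconds % 60:02}"

print(stamp(125.7))  # 125 seconds -> "02:05"
```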

View File

@@ -0,0 +1,34 @@
{
"name": "community.general devcontainer",
"image": "mcr.microsoft.com/devcontainers/python:3.14-bookworm",
"features": {
"ghcr.io/devcontainers/features/docker-in-docker:2": {}
},
"customizations": {
"vscode": {
"settings": {
"terminal.integrated.shell.linux": "/bin/bash",
"python.pythonPath": "/usr/local/bin/python",
"editor.defaultFormatter": "charliermarsh.ruff",
"editor.formatOnSave": true,
"files.autoSave": "afterDelay",
"files.eol": "\n",
"files.insertFinalNewline": true,
"files.trimFinalNewlines": true,
"files.trimTrailingWhitespace": true
},
"extensions": [
"charliermarsh.ruff",
"ms-python.python",
"ms-python.vscode-pylance",
"redhat.ansible",
"redhat.vscode-yaml",
"trond-snekvik.simple-rst"
]
}
},
"remoteUser": "vscode",
"postCreateCommand": ".devcontainer/setup.sh",
"workspaceFolder": "/workspace/ansible_collections/community/general",
"workspaceMount": "source=${localWorkspaceFolder},target=/workspace/ansible_collections/community/general,type=bind"
}

View File

@@ -0,0 +1,3 @@
GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
SPDX-License-Identifier: GPL-3.0-or-later
SPDX-FileCopyrightText: 2025 Alexei Znamensky <russoz@gmail.com>

View File

@@ -1,9 +1,10 @@
---
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# SPDX-FileCopyrightText: 2025 Alexei Znamensky <russoz@gmail.com>
- name: delete backports.lzma
pip:
name: backports.lzma
state: absent
nox
ruff
antsibull-nox
pre-commit
ansible-core
andebox

.devcontainer/setup.sh Executable file

@@ -0,0 +1,17 @@
#!/usr/bin/env bash
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
set -x
sudo chown -R vscode:vscode /workspace/
pip install -U pip
pip install -r .devcontainer/requirements-dev.txt
pip install -r tests/unit/requirements.txt
export ANSIBLE_COLLECTIONS_PATH=/workspace:${ANSIBLE_COLLECTIONS_PATH}
ansible-galaxy collection install -v -r tests/unit/requirements.yml
ansible-galaxy collection install -v -r tests/integration/requirements.yml
pre-commit install

View File

@@ -7,3 +7,9 @@ d032de3b16eed11ea3a31cd3d96d78f7c46a2ee0
e8f965fbf8154ea177c6622da149f2ae8533bd3c
e938ca5f20651abc160ee6aba10014013d04dcc1
eaa5e07b2866e05b6c7b5628ca92e9cb1142d008
# Code reformatting
340ff8586d4f1cb6a0f3c934eb42589bcc29c0ea
e530d2906a1f61df89861286ac57c951a247f32c
b769b0bc01520d12699d3911e1fc290b813cde40
dd9c86dfc094131f223ffb59e5a3d9f2dfc5875d

.github/BOTMETA.yml

@@ -65,6 +65,9 @@ files:
$callbacks/log_plays.py: {}
$callbacks/loganalytics.py:
maintainers: zhcli
$callbacks/loganalytics_ingestion.py:
ignore: zhcli
maintainers: pboushy vsh47 wtcline-intc
$callbacks/logdna.py: {}
$callbacks/logentries.py: {}
$callbacks/logstash.py:
@@ -99,7 +102,6 @@ files:
$callbacks/unixy.py:
labels: unixy
maintainers: akatch
$callbacks/yaml.py: {}
$connections/:
labels: connections
$connections/chroot.py: {}
@@ -134,6 +136,8 @@ files:
$doc_fragments/hwc.py:
labels: hwc
maintainers: $team_huawei
$doc_fragments/_icinga2_api.py:
maintainers: cfiehe
$doc_fragments/nomad.py:
maintainers: chris93111 apecnascimento
$doc_fragments/pipx.py:
@@ -217,6 +221,10 @@ files:
maintainers: resmo
$filters/to_time_unit.yml:
maintainers: resmo
$filters/to_toml.py:
maintainers: milliams
$filters/to_toml.yml:
maintainers: milliams
$filters/to_weeks.yml:
maintainers: resmo
$filters/to_yaml.py:
@@ -239,6 +247,9 @@ files:
maintainers: vbotka
$inventories/icinga2.py:
maintainers: BongoEADGC6
$inventories/incus.py:
labels: incus
maintainers: stgraber
$inventories/linode.py:
keywords: linode dynamic inventory script
labels: cloud linode
@@ -301,7 +312,7 @@ files:
$lookups/lmdb_kv.py:
maintainers: jpmens
$lookups/merge_variables.py:
maintainers: rlenferink m-a-r-k-e alpex8
maintainers: rlenferink m-a-r-k-e alpex8 cfiehe
$lookups/onepass:
labels: onepassword
maintainers: samdoran
@@ -356,6 +367,8 @@ files:
keywords: cloud huawei hwc
labels: huawei hwc_utils networking
maintainers: $team_huawei
$module_utils/_icinga2.py:
maintainers: cfiehe
$module_utils/identity/keycloak/keycloak.py:
maintainers: $team_keycloak
$module_utils/identity/keycloak/keycloak_clientsecret.py:
@@ -366,6 +379,13 @@ files:
$module_utils/jenkins.py:
labels: jenkins
maintainers: russoz
$module_utils/_crypt.py:
maintainers: russoz
$module_utils/_lxc.py:
maintainers: russoz
$module_utils/_lvm.py:
labels: lvm
maintainers: russoz
$module_utils/manageiq.py:
labels: manageiq
maintainers: $team_manageiq
@@ -394,9 +414,6 @@ files:
$module_utils/puppet.py:
labels: puppet
maintainers: russoz
$module_utils/pure.py:
labels: pure pure_storage
maintainers: $team_purestorage
$module_utils/redfish_utils.py:
labels: redfish_utils
maintainers: $team_redfish
@@ -480,8 +497,6 @@ files:
keywords: beadm dladm illumos ipadm nexenta omnios openindiana pfexec smartos solaris sunos zfs zpool
labels: beadm solaris
maintainers: $team_solaris
$modules/bearychat.py:
maintainers: tonyseek
$modules/bigpanda.py:
ignore: hkariti
$modules/bitbucket_:
@@ -587,9 +602,6 @@ files:
$modules/etcd3.py:
ignore: vfauth
maintainers: evrardjp
$modules/facter.py:
labels: facter
maintainers: $team_ansible_core gamethis
$modules/facter_facts.py:
labels: facter
maintainers: russoz $team_ansible_core gamethis
@@ -598,6 +610,8 @@ files:
$modules/filesystem.py:
labels: filesystem
maintainers: pilou- abulimov quidame
$modules/file_remove.py:
maintainers: shahargolshani
$modules/flatpak.py:
maintainers: $team_flatpak
$modules/flatpak_remote.py:
@@ -633,6 +647,10 @@ files:
maintainers: adrianmoisey
$modules/github_repo.py:
maintainers: atorrescogollo
$modules/github_secrets.py:
maintainers: konstruktoid
$modules/github_secrets_info.py:
maintainers: konstruktoid
$modules/gitlab_:
keywords: gitlab source_control
maintainers: $team_gitlab
@@ -648,10 +666,10 @@ files:
maintainers: zvaraondrej
$modules/gitlab_milestone.py:
maintainers: gpongelli
$modules/gitlab_project_variable.py:
maintainers: markuman
$modules/gitlab_instance_variable.py:
maintainers: benibr
$modules/gitlab_project_variable.py:
maintainers: markuman
$modules/gitlab_runner.py:
maintainers: SamyCoenen
$modules/gitlab_user.py:
@@ -711,6 +729,8 @@ files:
maintainers: $team_huawei huaweicloud
$modules/ibm_sa_:
maintainers: tzure
$modules/icinga2_downtime.py:
maintainers: cfiehe
$modules/icinga2_feature.py:
maintainers: nerzhul
$modules/icinga2_host.py:
@@ -749,6 +769,8 @@ files:
maintainers: obourdon hryamzik
$modules/ip_netns.py:
maintainers: bregman-arie
$modules/ip2location_info.py:
maintainers: ip2location
$modules/ipa_:
maintainers: $team_ipa
ignore: fxfitz
@@ -756,14 +778,14 @@ files:
maintainers: abakanovskii
$modules/ipa_dnsrecord.py:
maintainers: $team_ipa jwbernin
$modules/ipbase_info.py:
maintainers: dominikkukacka
$modules/ipa_pwpolicy.py:
maintainers: adralioh
$modules/ipa_service.py:
maintainers: cprh
$modules/ipa_vault.py:
maintainers: jparrill
$modules/ipbase_info.py:
maintainers: dominikkukacka
$modules/ipify_facts.py:
maintainers: resmo
$modules/ipinfoio_facts.py:
@@ -813,6 +835,8 @@ files:
maintainers: Slezhuk pertoft
$modules/kdeconfig.py:
maintainers: smeso
$modules/kea_command.py:
maintainers: mirabilos
$modules/kernel_blacklist.py:
maintainers: matze
$modules/keycloak_:
@@ -821,16 +845,22 @@ files:
maintainers: elfelip Gaetan2907
$modules/keycloak_authentication_required_actions.py:
maintainers: Skrekulko
$modules/keycloak_authentication_v2.py:
maintainers: thomasbargetz
$modules/keycloak_authz_authorization_scope.py:
maintainers: mattock
$modules/keycloak_authz_permission.py:
maintainers: mattock
$modules/keycloak_authz_custom_policy.py:
maintainers: mattock
$modules/keycloak_authz_permission.py:
maintainers: mattock
$modules/keycloak_authz_permission_info.py:
maintainers: mattock
$modules/keycloak_client.py:
maintainers: koke1997
$modules/keycloak_client_rolemapping.py:
maintainers: Gaetan2907
$modules/keycloak_client_rolescope.py:
maintainers: desand01
$modules/keycloak_clientscope.py:
maintainers: Gaetan2907
$modules/keycloak_clientscope_type.py:
@@ -841,6 +871,8 @@ files:
maintainers: fynncfchen johncant
$modules/keycloak_component.py:
maintainers: fivetide
$modules/keycloak_component_info.py:
maintainers: desand01
$modules/keycloak_group.py:
maintainers: adamgoossens
$modules/keycloak_identity_provider.py:
@@ -850,23 +882,23 @@ files:
$modules/keycloak_realm_info.py:
maintainers: fynncfchen
$modules/keycloak_realm_key.py:
maintainers: mattock
maintainers: mattock koke1997
$modules/keycloak_realm_localization.py:
maintainers: danekja
$modules/keycloak_realm_rolemapping.py:
maintainers: agross mhuysamen Gaetan2907
$modules/keycloak_role.py:
maintainers: laurpaum
$modules/keycloak_user.py:
maintainers: elfelip
$modules/keycloak_user_execute_actions_email.py:
maintainers: mariusbertram
$modules/keycloak_user_federation.py:
maintainers: laurpaum
$modules/keycloak_user_rolemapping.py:
maintainers: bratwurzt koke1997
$modules/keycloak_userprofile.py:
maintainers: yeoldegrove
$modules/keycloak_component_info.py:
maintainers: desand01
$modules/keycloak_client_rolescope.py:
maintainers: desand01
$modules/keycloak_user_rolemapping.py:
maintainers: bratwurzt
$modules/keycloak_realm_rolemapping.py:
maintainers: agross mhuysamen Gaetan2907
$modules/keyring.py:
maintainers: ahussey-redhat
$modules/keyring_info.py:
@@ -909,6 +941,8 @@ files:
labels: logentries
$modules/logentries_msg.py:
maintainers: jcftang
$modules/logrotate.py:
maintainers: a-gabidullin
$modules/logstash_plugin.py:
maintainers: nerzhul
$modules/lvg.py:
@@ -931,6 +965,10 @@ files:
maintainers: conloos
$modules/lxd_project.py:
maintainers: we10710aa
$modules/lxd_storage_pool_info.py:
maintainers: smcavoy
$modules/lxd_storage_volume_info.py:
maintainers: smcavoy
$modules/macports.py:
ignore: ryansb
keywords: brew cask darwin homebrew macosx macports osx
@@ -1307,8 +1345,11 @@ files:
$modules/snap_alias.py:
labels: snap
maintainers: russoz
$modules/snap_connect.py:
labels: snap
maintainers: russoz
$modules/snmp_facts.py:
maintainers: ogenstad ujwalkomarla
maintainers: ogenstad ujwalkomarla lalten
$modules/solaris_zone.py:
keywords: beadm dladm illumos ipadm nexenta omnios openindiana pfexec smartos solaris sunos zfs zpool
labels: solaris
@@ -1325,6 +1366,8 @@ files:
maintainers: farhan7500 gautamphegde
$modules/ssh_config.py:
maintainers: gaqzi Akasurde
$modules/sssd_info.py:
maintainers: a-gabidullin
$modules/stacki_host.py:
labels: stacki_host
maintainers: bsanders bbyhuy
@@ -1481,22 +1524,24 @@ files:
ignore: matze
labels: zypper
maintainers: $team_suse
$plugin_utils/ansible_type.py:
maintainers: vbotka
$modules/zypper_repository_info.py:
labels: zypper
maintainers: $team_suse TobiasZeuch181
$plugin_utils/ansible_type.py:
maintainers: vbotka
$plugin_utils/keys_filter.py:
maintainers: vbotka
$plugin_utils/unsafe.py:
maintainers: felixfontein
$plugin_utils/_tags.py:
maintainers: felixfontein
$tests/a_module.py:
maintainers: felixfontein
$tests/ansible_type.py:
maintainers: vbotka
$tests/fqdn_valid.py:
maintainers: vbotka
#########################
#########################
docs/docsite/rst/filter_guide.rst: {}
docs/docsite/rst/filter_guide_abstract_informations.rst: {}
docs/docsite/rst/filter_guide_abstract_informations_counting_elements_in_sequence.rst:
@@ -1535,6 +1580,8 @@ files:
maintainers: russoz
docs/docsite/rst/guide_deps.rst:
maintainers: russoz
docs/docsite/rst/guide_ee.rst:
maintainers: russoz
docs/docsite/rst/guide_iocage.rst:
maintainers: russoz felixfontein
docs/docsite/rst/guide_iocage_inventory.rst:
@@ -1565,7 +1612,7 @@ files:
maintainers: russoz
docs/docsite/rst/test_guide.rst:
maintainers: felixfontein
#########################
#########################
tests/:
labels: tests
tests/integration:
@@ -1592,7 +1639,7 @@ macros:
plugin_utils: plugins/plugin_utils
tests: plugins/test
team_ansible_core:
team_aix: MorrisA bcoca d-little flynn1973 gforster kairoaraujo marvin-sinister mator molekuul ramooncamacho wtcross
team_aix: MorrisA bcoca d-little flynn1973 gforster kairoaraujo marvin-sinister molekuul ramooncamacho wtcross
team_bsd: JoergFiedler MacLemon bcoca dch jasperla mekanix opoplawski overhacked tuxillo
team_consul: sgargan apollo13 Ilgmi
team_cyberark_conjur: jvanderhoof ryanprior
@@ -1610,11 +1657,10 @@ macros:
team_networking: NilashishC Qalthos danielmellado ganeshrn justjais trishnaguha sganesh-infoblox privateip
team_opennebula: ilicmilan meerkampdvv rsmontero xorel nilsding
team_oracle: manojmeda mross22 nalsaber
team_purestorage: bannaych dnix101 genegr lionmax opslounge raekins sdodsley sile16
team_redfish: mraineri tomasg2012 xmadsen renxulei rajeevkallur bhavya06 jyundt
team_rhsm: cnsnyder ptoscano
team_scaleway: remyleone abarbare
team_solaris: bcoca fishman jasperla jpdasma mator scathatheworm troy2914 xen0l
team_solaris: bcoca fishman jasperla jpdasma scathatheworm troy2914 xen0l
team_suse: commel evrardjp lrupp AnderEnder alxgu andytom sealor
team_virt: joshainglis karmab Thulium-Drake Ajpantuso
team_wdc: mikemoerk

View File

@@ -29,8 +29,8 @@ jobs:
strategy:
matrix:
ansible:
- '2.16'
- '2.17'
- '2.18'
runs-on: ubuntu-latest
steps:
- name: Perform sanity testing
@@ -58,18 +58,18 @@ jobs:
exclude:
- ansible: ''
include:
- ansible: '2.16'
python: '2.7'
- ansible: '2.16'
python: '3.6'
- ansible: '2.16'
python: '3.11'
- ansible: '2.17'
python: '3.7'
- ansible: '2.17'
python: '3.10'
- ansible: '2.17'
python: '3.12'
- ansible: '2.18'
python: '3.8'
- ansible: '2.18'
python: '3.11'
- ansible: '2.18'
python: '3.13'
steps:
- name: >-
@@ -105,44 +105,6 @@ jobs:
exclude:
- ansible: ''
include:
# 2.16
# CentOS 7 does not work in GHA, that's why it's not listed here.
- ansible: '2.16'
docker: fedora38
python: ''
target: azp/posix/1/
- ansible: '2.16'
docker: fedora38
python: ''
target: azp/posix/2/
- ansible: '2.16'
docker: fedora38
python: ''
target: azp/posix/3/
- ansible: '2.16'
docker: opensuse15
python: ''
target: azp/posix/1/
- ansible: '2.16'
docker: opensuse15
python: ''
target: azp/posix/2/
- ansible: '2.16'
docker: opensuse15
python: ''
target: azp/posix/3/
- ansible: '2.16'
docker: alpine3
python: ''
target: azp/posix/1/
- ansible: '2.16'
docker: alpine3
python: ''
target: azp/posix/2/
- ansible: '2.16'
docker: alpine3
python: ''
target: azp/posix/3/
# 2.17
- ansible: '2.17'
docker: fedora39
@@ -156,18 +118,6 @@ jobs:
docker: fedora39
python: ''
target: azp/posix/3/
- ansible: '2.17'
docker: alpine319
python: ''
target: azp/posix/1/
- ansible: '2.17'
docker: alpine319
python: ''
target: azp/posix/2/
- ansible: '2.17'
docker: alpine319
python: ''
target: azp/posix/3/
- ansible: '2.17'
docker: ubuntu2004
python: ''
@@ -180,6 +130,73 @@ jobs:
docker: ubuntu2004
python: ''
target: azp/posix/3/
- ansible: '2.17'
docker: alpine319
python: ''
target: azp/posix/1/
- ansible: '2.17'
docker: alpine319
python: ''
target: azp/posix/2/
- ansible: '2.17'
docker: alpine319
python: ''
target: azp/posix/3/
# Right now all generic tests are disabled. Uncomment when at least one of them is re-enabled.
# - ansible: '2.17'
# docker: default
# python: '3.7'
# target: azp/generic/1/
# - ansible: '2.17'
# docker: default
# python: '3.12'
# target: azp/generic/1/
# 2.18
- ansible: '2.18'
docker: fedora40
python: ''
target: azp/posix/1/
- ansible: '2.18'
docker: fedora40
python: ''
target: azp/posix/2/
- ansible: '2.18'
docker: fedora40
python: ''
target: azp/posix/3/
- ansible: '2.18'
docker: ubuntu2404
python: ''
target: azp/posix/1/
- ansible: '2.18'
docker: ubuntu2404
python: ''
target: azp/posix/2/
- ansible: '2.18'
docker: ubuntu2404
python: ''
target: azp/posix/3/
- ansible: '2.18'
docker: alpine320
python: ''
target: azp/posix/1/
- ansible: '2.18'
docker: alpine320
python: ''
target: azp/posix/2/
- ansible: '2.18'
docker: alpine320
python: ''
target: azp/posix/3/
# Right now all generic tests are disabled. Uncomment when at least one of them is re-enabled.
# - ansible: '2.18'
# docker: default
# python: '3.8'
# target: azp/generic/1/
# - ansible: '2.18'
# docker: default
# python: '3.13'
# target: azp/generic/1/
steps:
- name: >-

.github/workflows/docs.yml

@@ -0,0 +1,34 @@
---
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
name: nox
'on':
push:
branches:
- main
- stable-*
paths:
- docs/**
pull_request:
paths:
- docs/**
# Run CI once per day (at 08:00 UTC)
schedule:
- cron: '0 8 * * *'
workflow_dispatch:
jobs:
nox:
runs-on: ubuntu-latest
name: "Validate generated Ansible output"
steps:
- name: Check out collection
uses: actions/checkout@v6
with:
persist-credentials: false
- name: Run nox
uses: ansible-community/antsibull-nox@main
with:
sessions: ansible-output

.mypy.ini

@@ -0,0 +1,242 @@
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
[mypy]
# check_untyped_defs = True
# disallow_untyped_defs = True
# strict = True -- only try to enable once everything (including dependencies!) is typed
strict_equality = True
strict_bytes = True
warn_redundant_casts = True
# warn_return_any = True
warn_unreachable = True
exclude = tests/integration/targets/django_.*/files/.*
[mypy-ansible.*]
# ansible-core has partial typing information
follow_untyped_imports = True
# The following imports are Python packages that:
# 1. We do not install (we can't install everything!);
# 2. That have type stubs, but we don't install them (again, we can't install everything!); or
# 3. That have no types and type stubs.
[mypy-aerospike.*]
ignore_missing_imports = True
[mypy-antsibull_nox.*]
ignore_missing_imports = True
[mypy-asyncore.*]
ignore_missing_imports = True
[mypy-boto3.*]
ignore_missing_imports = True
[mypy-bs4.*]
ignore_missing_imports = True
[mypy-cgi.*]
ignore_missing_imports = True
[mypy-chef.*]
ignore_missing_imports = True
[mypy-consul.*]
ignore_missing_imports = True
[mypy-credstash.*]
ignore_missing_imports = True
[mypy-crypt.*]
ignore_missing_imports = True
[mypy-daemon.*]
ignore_missing_imports = True
[mypy-datadog.*]
ignore_missing_imports = True
[mypy-dbus.*]
ignore_missing_imports = True
[mypy-delinea.*]
ignore_missing_imports = True
[mypy-dnf.*]
ignore_missing_imports = True
[mypy-dnsimple.*]
ignore_missing_imports = True
[mypy-etcd3.*]
ignore_missing_imports = True
[mypy-flatdict.*]
ignore_missing_imports = True
[mypy-footmark.*]
ignore_missing_imports = True
[mypy-fqdn.*]
ignore_missing_imports = True
[mypy-func.*]
ignore_missing_imports = True
[mypy-gi.*]
ignore_missing_imports = True
[mypy-github3.*]
ignore_missing_imports = True
[mypy-gssapi.*]
ignore_missing_imports = True
[mypy-hashids.*]
ignore_missing_imports = True
[mypy-heroku3.*]
ignore_missing_imports = True
[mypy-hpe3parclient.*]
ignore_missing_imports = True
[mypy-hpe3par_sdk.*]
ignore_missing_imports = True
[mypy-hpilo.*]
ignore_missing_imports = True
[mypy-hpOneView.*]
ignore_missing_imports = True
[mypy-httmock.*] # TODO!
ignore_missing_imports = True
[mypy-influxdb.*]
ignore_missing_imports = True
[mypy-jc.*]
ignore_missing_imports = True
[mypy-jenkins.*]
ignore_missing_imports = True
[mypy-jmespath.*]
ignore_missing_imports = True
[mypy-jsonpatch.*]
ignore_missing_imports = True
[mypy-kazoo.*]
ignore_missing_imports = True
[mypy-keyring.*]
ignore_missing_imports = True
[mypy-keystoneauth1.*]
ignore_missing_imports = True
[mypy-layman.*]
ignore_missing_imports = True
[mypy-ldap.*]
ignore_missing_imports = True
[mypy-legacycrypt.*]
ignore_missing_imports = True
[mypy-libcloud.*]
ignore_missing_imports = True
[mypy-linode.*]
ignore_missing_imports = True
[mypy-linode_api4.*]
ignore_missing_imports = True
[mypy-lmdb.*]
ignore_missing_imports = True
[mypy-logdna.*]
ignore_missing_imports = True
[mypy-logstash.*]
ignore_missing_imports = True
[mypy-lxc.*]
ignore_missing_imports = True
[mypy-manageiq_client.*]
ignore_missing_imports = True
[mypy-matrix_client.*]
ignore_missing_imports = True
[mypy-memcache.*]
ignore_missing_imports = True
[mypy-nc_dnsapi.*]
ignore_missing_imports = True
[mypy-nomad.*]
ignore_missing_imports = True
[mypy-nopackagewiththisname.*]
ignore_missing_imports = True
[mypy-nox.*]
ignore_missing_imports = True
[mypy-oci.*]
ignore_missing_imports = True
[mypy-oneandone.*]
ignore_missing_imports = True
[mypy-opentelemetry.*]
ignore_missing_imports = True
[mypy-ovh.*]
ignore_missing_imports = True
[mypy-ovirtsdk.*]
ignore_missing_imports = True
[mypy-packet.*]
ignore_missing_imports = True
[mypy-paho.*]
ignore_missing_imports = True
[mypy-pam.*]
ignore_missing_imports = True
[mypy-pdpyras.*]
ignore_missing_imports = True
[mypy-petname.*]
ignore_missing_imports = True
[mypy-pingdom.*]
ignore_missing_imports = True
[mypy-pkg_resources.*]
ignore_missing_imports = True
[mypy-portage.*]
ignore_missing_imports = True
[mypy-potatoes_that_will_never_be_there.*]
ignore_missing_imports = True
[mypy-prettytable.*]
ignore_missing_imports = True
[mypy-pubnub_blocks_client.*]
ignore_missing_imports = True
[mypy-pushbullet.*]
ignore_missing_imports = True
[mypy-pycdlib.*]
ignore_missing_imports = True
[mypy-pyghmi.*]
ignore_missing_imports = True
[mypy-pylxca.*]
ignore_missing_imports = True
[mypy-pymssql.*]
ignore_missing_imports = True
[mypy-pyodbc.*]
ignore_missing_imports = True
[mypy-pyone.*]
ignore_missing_imports = True
[mypy-pypureomapi.*]
ignore_missing_imports = True
[mypy-pysnmp.*]
ignore_missing_imports = True
[mypy-pyxcli.*]
ignore_missing_imports = True
[mypy-rpm.*]
ignore_missing_imports = True
[mypy-ruamel.yaml.*]
ignore_missing_imports = True
[mypy-salt.*]
ignore_missing_imports = True
[mypy-selinux.*]
ignore_missing_imports = True
[mypy-semantic_version.*]
ignore_missing_imports = True
[mypy-sendgrid.*]
ignore_missing_imports = True
[mypy-seobject.*]
ignore_missing_imports = True
[mypy-sha.*]
ignore_missing_imports = True
[mypy-smtpd.*]
ignore_missing_imports = True
[mypy-smtpd_tls.*]
ignore_missing_imports = True
[mypy-SoftLayer.*]
ignore_missing_imports = True
[mypy-spotinst_sdk.*]
ignore_missing_imports = True
[mypy-statsd.*]
ignore_missing_imports = True
[mypy-storops.*]
ignore_missing_imports = True
[mypy-taiga.*]
ignore_missing_imports = True
[mypy-thycotic.*]
ignore_missing_imports = True
[mypy-tomlkit.*]
ignore_missing_imports = True
[mypy-univention.*]
ignore_missing_imports = True
[mypy-vexatapi.*]
ignore_missing_imports = True
[mypy-voluptuous.*]
ignore_missing_imports = True
[mypy-websocket.*]
ignore_missing_imports = True
[mypy-XenAPI.*]
ignore_missing_imports = True
[mypy-xkcdpass.*]
ignore_missing_imports = True
[mypy-xmljson.*]
ignore_missing_imports = True
[mypy-xmltodict.*]
ignore_missing_imports = True
[mypy-xmpp.*]
ignore_missing_imports = True

.pre-commit-config.yaml

@@ -0,0 +1,13 @@
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# SPDX-FileCopyrightText: 2025 Alexei Znamensky <russoz@gmail.com>
repos:
- repo: https://github.com/astral-sh/ruff-pre-commit
# Ruff version.
rev: v0.15.9
hooks:
# Run the linter.
- id: ruff-check
# Run the formatter.
- id: ruff-format

File diff suppressed because one or more lines are too long

File diff suppressed because it is too large

View File

@@ -39,7 +39,7 @@ Please read our ['Contributing to collections'](https://docs.ansible.com/project
* Make sure your PR includes a [changelog fragment](https://docs.ansible.com/projects/ansible/devel/community/collection_development_process.html#creating-a-changelog-fragment).
* You must not include a fragment for new modules or new plugins. Also you shouldn't include one for docs-only changes. (If you're not sure, simply don't include one, we'll tell you whether one is needed or not :) )
* Please always include a link to the pull request itself, and if the PR is about an issue, also a link to the issue. Also make sure the fragment ends with a period, and begins with a lower-case letter after `-`. (Again, if you don't do this, we'll add suggestions to fix it, so don't worry too much :) )
* Avoid reformatting unrelated parts of the codebase in your PR. These types of changes will likely be requested for reversion, create additional work for reviewers, and may cause approval to be delayed.
* Note that we format the code with `ruff format`. If your change does not match the formatter's expectations, CI will fail and your PR will not get merged. See below for how to format code with antsibull-nox.
You can also read the Ansible community's [Quick-start development guide](https://docs.ansible.com/projects/ansible/devel/community/create_pr_quick_start.html).
@@ -49,11 +49,24 @@ If you want to test a PR locally, refer to [our testing guide](https://docs.ansi
If you find any inconsistencies or places in this document which can be improved, feel free to raise an issue or pull request to fix it.
## Run sanity or unit locally (with antsibull-nox)
## Format code; and run sanity or unit tests locally (with antsibull-nox)
The easiest way to run sanity and unit tests locally is to use [antsibull-nox](https://docs.ansible.com/projects/antsibull-nox/).
The easiest way to format the code, and to run sanity and unit tests locally is to use [antsibull-nox](https://docs.ansible.com/projects/antsibull-nox/).
(If you have [nox](https://nox.thea.codes/en/stable/) installed, it will automatically install antsibull-nox in a virtual environment for you.)
### Format code
The following commands show how to run ruff format:
```.bash
# Run all configured formatters:
nox -Re formatters
# If you notice discrepancies between your local formatter and CI, you might
# need to re-generate the virtual environment:
nox -e formatters
```
### Sanity tests
The following commands show how to run ansible-test sanity tests:
@@ -120,6 +133,7 @@ ansible-test sanity --docker -v plugins/modules/system/pids.py tests/integration
Note that for running unit tests, you need to install required collections in the same folder structure that `community.general` is checked out in.
Right now, you need to install [`community.internal_test_tools`](https://github.com/ansible-collections/community.internal_test_tools).
If you want to use the latest version from GitHub, you can run:
```
git clone https://github.com/ansible-collections/community.internal_test_tools.git ~/dev/ansible_collections/community/internal_test_tools
```
@@ -142,6 +156,7 @@ ansible-test units --docker -v --python 3.8 tests/unit/plugins/modules/net_tools
Note that for running integration tests, you need to install required collections in the same folder structure that `community.general` is checked out in.
Right now, depending on the test, you need to install [`ansible.posix`](https://github.com/ansible-collections/ansible.posix), [`community.crypto`](https://github.com/ansible-collections/community.crypto), and [`community.docker`](https://github.com/ansible-collections/community.docker):
If you want to use the latest versions from GitHub, you can run:
```
mkdir -p ~/dev/ansible_collections/ansible
git clone https://github.com/ansible-collections/ansible.posix.git ~/dev/ansible_collections/ansible/posix
@@ -154,11 +169,13 @@ The following commands show how to run integration tests:
#### In Docker
Integration tests on Docker have the following parameters:
- `image_name` (required): The name of the Docker image. To get the list of supported Docker images, run
`ansible-test integration --help` and look for _target docker images_.
- `test_name` (optional): The name of the integration test.
For modules, this equals the short name of the module; for example, `pacman` in case of `community.general.pacman`.
For plugins, the plugin type is added before the plugin's short name, for example `callback_yaml` for the `community.general.yaml` callback.
```bash
# Test all plugins/modules on fedora40
ansible-test integration -v --docker fedora40
@@ -179,6 +196,31 @@ ansible-test integration -v lookup_flattened
If you are unsure about the integration test target name for a module or plugin, you can take a look in `tests/integration/targets/`. Tests for plugins have the plugin type prepended.
## Devcontainer
Since community.general 12.2.0, the project repository supports [devcontainers](https://containers.dev/). In short, it is a standard mechanism to
create a container that is then used during the development cycle. Many tools are pre-installed in the container and will be already available
to you as a developer. A number of different IDEs support that configuration, the most prominent ones being VSCode and PyCharm.
See the files under [.devcontainer](.devcontainer) for details on what is deployed inside that container.
Beware of:
- By default, the devcontainer installs the latest version of `ansible-core`.
When testing your changes locally, keep in mind that the collection must support older versions of
`ansible-core` and, depending on what is being tested, results may vary.
- Integration tests executed directly inside the devcontainer without isolation (see above) may fail if
they expect to be run in full-fledged VMs. On the other hand, the devcontainer setup allows running
containers inside the container (the `docker-in-docker` feature).
- The devcontainer is built with a directory structure such that
`.../ansible_collections/community/general` contains the project repository, so `ansible-test` and
other standard tools should work without any additional setup.
- By default, the devcontainer installs `pre-commit` and configures it to perform `ruff check` and
`ruff format` on the Python files prior to committing. That configuration is going to be used by
`git` even outside the devcontainer. To prevent errors, you have to either install `pre-commit` on
your computer, outside the devcontainer, or run `pre-commit uninstall` from within the devcontainer
before quitting it.
## Creating new modules or plugins
Creating new modules and plugins requires a bit more work than other Pull Requests.
@@ -188,7 +230,7 @@ Creating new modules and plugins requires a bit more work than other Pull Reques
2. Please do not add more than one plugin/module in one PR, especially if it is the first plugin/module you are contributing.
That makes it easier for reviewers, and increases the chance that your PR will get merged. If you plan to contribute a group
of plugins/modules (say, more than a module and a corresponding ``_info`` module), please mention that in the first PR. In
of plugins/modules (say, more than a module and a corresponding `_info` module), please mention that in the first PR. In
such cases, you also have to think whether it is better to publish the group of plugins/modules in a new collection.
3. When creating a new module or plugin, please make sure that you follow various guidelines:

View File

@@ -1,48 +0,0 @@
PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2
--------------------------------------------
1. This LICENSE AGREEMENT is between the Python Software Foundation
("PSF"), and the Individual or Organization ("Licensee") accessing and
otherwise using this software ("Python") in source or binary form and
its associated documentation.
2. Subject to the terms and conditions of this License Agreement, PSF hereby
grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce,
analyze, test, perform and/or display publicly, prepare derivative works,
distribute, and otherwise use Python alone or in any derivative version,
provided, however, that PSF's License Agreement and PSF's notice of copyright,
i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020, 2021 Python Software Foundation;
All Rights Reserved" are retained in Python alone or in any derivative version
prepared by Licensee.
3. In the event Licensee prepares a derivative work that is based on
or incorporates Python or any part thereof, and wants to make
the derivative work available to others as provided herein, then
Licensee hereby agrees to include in any such work a brief summary of
the changes made to Python.
4. PSF is making Python available to Licensee on an "AS IS"
basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT
INFRINGE ANY THIRD PARTY RIGHTS.
5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON,
OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
6. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.
7. Nothing in this License Agreement shall be deemed to create any
relationship of agency, partnership, or joint venture between PSF and
Licensee. This License Agreement does not grant permission to use PSF
trademarks or trade name in a trademark sense to endorse or promote
products or services of Licensee, or any third party.
8. By copying, installing or otherwise using Python, Licensee
agrees to be bound by the terms and conditions of this License
Agreement.

View File

@@ -7,9 +7,9 @@ SPDX-License-Identifier: GPL-3.0-or-later
# Community General Collection
[![Documentation](https://img.shields.io/badge/docs-brightgreen.svg)](https://docs.ansible.com/projects/ansible/devel/collections/community/general/)
[![Build Status](https://dev.azure.com/ansible/community.general/_apis/build/status/CI?branchName=stable-11)](https://dev.azure.com/ansible/community.general/_build?definitionId=31)
[![EOL CI](https://github.com/ansible-collections/community.general/actions/workflows/ansible-test.yml/badge.svg?branch=stable-11)](https://github.com/ansible-collections/community.general/actions)
[![Nox CI](https://github.com/ansible-collections/community.general/actions/workflows/nox.yml/badge.svg?branch=stable-11)](https://github.com/ansible-collections/community.general/actions)
[![Build Status](https://dev.azure.com/ansible/community.general/_apis/build/status/CI?branchName=stable-12)](https://dev.azure.com/ansible/community.general/_build?definitionId=31)
[![EOL CI](https://github.com/ansible-collections/community.general/actions/workflows/ansible-test.yml/badge.svg?branch=stable-12)](https://github.com/ansible-collections/community.general/actions)
[![Nox CI](https://github.com/ansible-collections/community.general/actions/workflows/nox.yml/badge.svg?branch=stable-12)](https://github.com/ansible-collections/community.general/actions)
[![Codecov](https://img.shields.io/codecov/c/github/ansible-collections/community.general)](https://codecov.io/gh/ansible-collections/community.general)
[![REUSE status](https://api.reuse.software/badge/github.com/ansible-collections/community.general)](https://api.reuse.software/info/github.com/ansible-collections/community.general)
@@ -39,7 +39,7 @@ For more information about communication, see the [Ansible communication guide](
## Tested with Ansible
Tested with the current ansible-core 2.16, ansible-core 2.17, ansible-core 2.18, ansible-core 2.19, ansible-core 2.20 releases and the current development version of ansible-core. Ansible-core versions before 2.16.0 are not supported. This includes all ansible-base 2.10 and Ansible 2.9 releases.
Tested with the current ansible-core 2.17, ansible-core 2.18, ansible-core 2.19, ansible-core 2.20, ansible-core 2.21 releases and the current development version of ansible-core. Ansible-core versions before 2.17.0 are not supported. This includes all ansible-base 2.10 and Ansible 2.9 releases.
## External requirements
@@ -86,13 +86,13 @@ We are actively accepting new contributors.
All types of contributions are very welcome.
You don't know how to start? Refer to our [contribution guide](https://github.com/ansible-collections/community.general/blob/main/CONTRIBUTING.md)!
You don't know how to start? Refer to our [contribution guide](https://github.com/ansible-collections/community.general/blob/stable-12/CONTRIBUTING.md)!
The current maintainers are listed in the [commit-rights.md](https://github.com/ansible-collections/community.general/blob/main/commit-rights.md#people) file. If you have questions or need help, feel free to mention them in the proposals.
The current maintainers are listed in the [commit-rights.md](https://github.com/ansible-collections/community.general/blob/stable-12/commit-rights.md#people) file. If you have questions or need help, feel free to mention them in the proposals.
You can find more information in the [developer guide for collections](https://docs.ansible.com/projects/ansible/devel/dev_guide/developing_collections.html#contributing-to-collections), and in the [Ansible Community Guide](https://docs.ansible.com/projects/ansible/latest/community/index.html).
Also for some notes specific to this collection see [our CONTRIBUTING documentation](https://github.com/ansible-collections/community.general/blob/main/CONTRIBUTING.md).
Also for some notes specific to this collection see [our CONTRIBUTING documentation](https://github.com/ansible-collections/community.general/blob/stable-12/CONTRIBUTING.md).
### Running tests
@@ -102,8 +102,8 @@ See [here](https://docs.ansible.com/projects/ansible/devel/dev_guide/developing_
To learn how to maintain / become a maintainer of this collection, refer to:
* [Committer guidelines](https://github.com/ansible-collections/community.general/blob/main/commit-rights.md).
* [Maintainer guidelines](https://github.com/ansible/community-docs/blob/main/maintaining.rst).
* [Committer guidelines](https://github.com/ansible-collections/community.general/blob/stable-12/commit-rights.md).
* [Maintainer guidelines](https://github.com/ansible/community-docs/blob/stable-12/maintaining.rst).
It is necessary for maintainers of this collection to be subscribed to:
@@ -118,7 +118,7 @@ See the [Releasing guidelines](https://github.com/ansible/community-docs/blob/ma
## Release notes
See the [changelog](https://github.com/ansible-collections/community.general/blob/stable-11/CHANGELOG.md).
See the [changelog](https://github.com/ansible-collections/community.general/blob/stable-12/CHANGELOG.md).
## Roadmap
@@ -137,8 +137,8 @@ See [this issue](https://github.com/ansible-collections/community.general/issues
This collection is primarily licensed and distributed as a whole under the GNU General Public License v3.0 or later.
See [LICENSES/GPL-3.0-or-later.txt](https://github.com/ansible-collections/community.general/blob/stable-11/COPYING) for the full text.
See [LICENSES/GPL-3.0-or-later.txt](https://github.com/ansible-collections/community.general/blob/stable-12/COPYING) for the full text.
Parts of the collection are licensed under the [BSD 2-Clause license](https://github.com/ansible-collections/community.general/blob/stable-11/LICENSES/BSD-2-Clause.txt), the [MIT license](https://github.com/ansible-collections/community.general/blob/stable-11/LICENSES/MIT.txt), and the [PSF 2.0 license](https://github.com/ansible-collections/community.general/blob/stable-11/LICENSES/PSF-2.0.txt).
Parts of the collection are licensed under the [BSD 2-Clause license](https://github.com/ansible-collections/community.general/blob/stable-12/LICENSES/BSD-2-Clause.txt) and the [MIT license](https://github.com/ansible-collections/community.general/blob/stable-12/LICENSES/MIT.txt).
All files have a machine-readable `SPDX-License-Identifier:` comment denoting their respective license(s), or an equivalent entry in an accompanying `.license` file. Only changelog fragments (which will not be part of a release) are covered by a blanket statement in `REUSE.toml`. This conforms to the [REUSE specification](https://reuse.software/spec/).

View File

@@ -20,15 +20,39 @@ stable_branches = [ "stable-*" ]
[sessions]
[sessions.lint]
code_files = ["."] # consider all Python files in the collection
run_isort = false
run_black = false
run_ruff_autofix = true
ruff_autofix_config = "ruff.toml"
ruff_autofix_select = [
"I",
"RUF022",
]
run_ruff_check = true
ruff_check_config = "ruff.toml"
run_ruff_format = true
ruff_format_config = "ruff.toml"
run_flake8 = false
run_pylint = false
run_yamllint = true
yamllint_config = ".yamllint"
# yamllint_config_plugins = ".yamllint-docs"
# yamllint_config_plugins_examples = ".yamllint-examples"
run_mypy = false
run_mypy = true
mypy_ansible_core_package = "ansible-core>=2.19.0"
mypy_config = ".mypy.ini"
mypy_extra_deps = [
"cryptography",
"dnspython",
"lxml-stubs",
"types-mock",
"types-paramiko",
"types-passlib",
"types-psutil",
"types-PyYAML",
"types-requests",
]
[sessions.docs_check]
validate_collection_refs="all"

File diff suppressed because it is too large Load Diff

View File

@@ -0,0 +1,2 @@
bugfixes:
- scaleway_image_info, scaleway_ip_info, scaleway_organization_info, scaleway_security_group_info, scaleway_server_info, scaleway_snapshot_info, scaleway_volume_info - fix ``NoneType`` error when the Scaleway API returns an empty or non-JSON response body (https://github.com/ansible-collections/community.general/issues/11361, https://github.com/ansible-collections/community.general/pull/11918).

View File

@@ -0,0 +1,2 @@
minor_changes:
- "mattermost, rocketchat, slack - update default ``icon_url`` to ansible favicon (https://github.com/ansible-collections/community.general/pull/11909)."

View File

@@ -0,0 +1,2 @@
bugfixes:
- crypttab - fix parsing of options whose value contains an equal sign (https://github.com/ansible-collections/community.general/issues/4963, https://github.com/ansible-collections/community.general/pull/11926).

View File

@@ -0,0 +1 @@
release_summary: Regular bugfix release.

View File

@@ -5,3 +5,13 @@
changelog:
write_changelog: true
ansible_output:
global_env:
ANSIBLE_STDOUT_CALLBACK: community.general.tasks_only
ANSIBLE_COLLECTIONS_TASKS_ONLY_NUMBER_OF_COLUMNS: 90
global_postprocessors:
reformat-yaml:
command:
- python
- docs/docsite/reformat-yaml.py

View File

@@ -8,6 +8,9 @@ sections:
toctree:
- filter_guide
- test_guide
- title: Deployment Guides
toctree:
- guide_ee
- title: Technology Guides
toctree:
- guide_alicloud

View File

@@ -0,0 +1,26 @@
#!/usr/bin/env python
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
import sys
from io import StringIO
from ruamel.yaml import YAML # type: ignore[import-not-found]
def main() -> None:
    yaml = YAML(typ="rt")
    yaml.indent(mapping=2, sequence=4, offset=2)

    # Load YAML from stdin
    data = yaml.load(sys.stdin)

    # Dump it back with normalized indentation
    sio = StringIO()
    yaml.dump(data, sio)
    print(sio.getvalue().rstrip("\n"))


if __name__ == "__main__":
    main()

View File

@@ -13,6 +13,34 @@ Use the filter :ansplugin:`community.general.keep_keys#filter` if you have a lis
Let us use the below list in the following examples:
.. ansible-output-meta::
actions:
- name: reset-previous-blocks
- name: set-template
template:
env:
ANSIBLE_CALLBACK_RESULT_FORMAT: yaml
variables:
data:
previous_code_block: yaml
previous_code_block_index: 0
computation:
previous_code_block: yaml+jinja
postprocessors:
- name: reformat-yaml
language: yaml
skip_first_lines: 2
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
- vars:
@{{ data | indent(8) }}@
@{{ computation | indent(8) }}@
ansible.builtin.debug:
var: result
.. code-block:: yaml
input:
@@ -37,24 +65,48 @@ Let us use the below list in the following examples:
gives
.. ansible-output-data::
playbook: ~
.. code-block:: yaml
:emphasize-lines: 1-
result:
- {k0_x0: A0, k1_x1: B0}
- {k0_x0: A1, k1_x1: B1}
- k0_x0: A0
k1_x1: B0
- k0_x0: A1
k1_x1: B1
.. versionadded:: 9.1.0
* The results of the below examples 1-5 are all the same:
.. ansible-output-data::
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
- vars:
@{{ data | indent(8) }}@
# I picked one of the examples
mp: equal
target: ['k0_x0', 'k1_x1']
result: "{{ input | community.general.keep_keys(target=target, matching_parameter=mp) }}"
ansible.builtin.debug:
var: result
.. code-block:: yaml
:emphasize-lines: 1-
result:
- {k0_x0: A0, k1_x1: B0}
- {k0_x0: A1, k1_x1: B1}
- k0_x0: A0
k1_x1: B0
- k0_x0: A1
k1_x1: B1
1. Match keys that equal any of the items in the target.
@@ -105,12 +157,28 @@ gives
* The results of the below examples 6-9 are all the same:
.. ansible-output-data::
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
- vars:
@{{ data | indent(8) }}@
# I picked one of the examples
mp: equal
target: k0_x0
result: "{{ input | community.general.keep_keys(target=target, matching_parameter=mp) }}"
ansible.builtin.debug:
var: result
.. code-block:: yaml
:emphasize-lines: 1-
result:
- {k0_x0: A0}
- {k0_x0: A1}
- k0_x0: A0
- k0_x0: A1
6. Match keys that equal the target.
@@ -148,4 +216,3 @@ gives
mp: regex
target: ^.*0_x.*$
result: "{{ input | community.general.keep_keys(target=target, matching_parameter=mp) }}"
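Outside of Ansible, the matching behavior of `keep_keys` can be sketched in plain Python. This is a simplified model of the documented semantics, not the collection's actual implementation; it only handles flat dictionaries and omits error handling:

```python
import re

# Illustrative sketch of the keep_keys filter semantics.
def keep_keys(items, target, matching_parameter="equal"):
    """Keep only the keys of each dictionary that match the target(s)."""
    targets = target if isinstance(target, list) else [target]
    matchers = {
        "equal": lambda key, t: key == t,
        "starts_with": lambda key, t: key.startswith(t),
        "ends_with": lambda key, t: key.endswith(t),
        "regex": lambda key, t: re.match(t, key) is not None,
    }
    match = matchers[matching_parameter]
    return [
        {k: v for k, v in item.items() if any(match(k, t) for t in targets)}
        for item in items
    ]

input_list = [
    {"k0_x0": "A0", "k1_x1": "B0", "k2_x2": ["C0"], "k3_x3": "foo"},
    {"k0_x0": "A1", "k1_x1": "B1", "k2_x2": ["C1"], "k3_x3": "bar"},
]
print(keep_keys(input_list, ["k0_x0", "k1_x1"]))
# → [{'k0_x0': 'A0', 'k1_x1': 'B0'}, {'k0_x0': 'A1', 'k1_x1': 'B1'}]
```

As in the examples above, `equal`, `starts_with`, `ends_with`, and `regex` targets that select the same keys produce the same result.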

View File

@@ -13,6 +13,34 @@ Use the filter :ansplugin:`community.general.remove_keys#filter` if you have a l
Let us use the below list in the following examples:
.. ansible-output-meta::
actions:
- name: reset-previous-blocks
- name: set-template
template:
env:
ANSIBLE_CALLBACK_RESULT_FORMAT: yaml
variables:
data:
previous_code_block: yaml
previous_code_block_index: 0
computation:
previous_code_block: yaml+jinja
postprocessors:
- name: reformat-yaml
language: yaml
skip_first_lines: 2
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
- vars:
@{{ data | indent(8) }}@
@{{ computation | indent(8) }}@
ansible.builtin.debug:
var: result
.. code-block:: yaml
input:
@@ -37,13 +65,19 @@ Let us use the below list in the following examples:
gives
.. ansible-output-data::
playbook: ~
.. code-block:: yaml
:emphasize-lines: 1-
result:
- k2_x2: [C0]
- k2_x2:
- C0
k3_x3: foo
- k2_x2: [C1]
- k2_x2:
- C1
k3_x3: bar
@@ -51,13 +85,31 @@ gives
* The results of the below examples 1-5 are all the same:
.. ansible-output-data::
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
- vars:
@{{ data | indent(8) }}@
# I picked one of the examples
mp: equal
target: ['k0_x0', 'k1_x1']
result: "{{ input | community.general.remove_keys(target=target, matching_parameter=mp) }}"
ansible.builtin.debug:
var: result
.. code-block:: yaml
:emphasize-lines: 1-
result:
- k2_x2: [C0]
- k2_x2:
- C0
k3_x3: foo
- k2_x2: [C1]
- k2_x2:
- C1
k3_x3: bar
@@ -109,15 +161,33 @@ gives
* The results of the below examples 6-9 are all the same:
.. ansible-output-data::
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
- vars:
@{{ data | indent(8) }}@
# I picked one of the examples
mp: equal
target: k0_x0
result: "{{ input | community.general.remove_keys(target=target, matching_parameter=mp) }}"
ansible.builtin.debug:
var: result
.. code-block:: yaml
:emphasize-lines: 1-
result:
- k1_x1: B0
k2_x2: [C0]
k2_x2:
- C0
k3_x3: foo
- k1_x1: B1
k2_x2: [C1]
k2_x2:
- C1
k3_x3: bar
@@ -156,4 +226,3 @@ gives
mp: regex
target: ^.*0_x.*$
result: "{{ input | community.general.remove_keys(target=target, matching_parameter=mp) }}"
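The documented behavior of `remove_keys` is the mirror image of `keep_keys`; a minimal Python sketch (illustrative only, flat dictionaries, no error handling) looks like this:

```python
import re

# Illustrative sketch of the remove_keys filter semantics.
def remove_keys(items, target, matching_parameter="equal"):
    """Drop the keys of each dictionary that match the target(s)."""
    targets = target if isinstance(target, list) else [target]
    matchers = {
        "equal": lambda key, t: key == t,
        "starts_with": lambda key, t: key.startswith(t),
        "ends_with": lambda key, t: key.endswith(t),
        "regex": lambda key, t: re.match(t, key) is not None,
    }
    match = matchers[matching_parameter]
    return [
        {k: v for k, v in item.items() if not any(match(k, t) for t in targets)}
        for item in items
    ]

input_list = [
    {"k0_x0": "A0", "k1_x1": "B0", "k2_x2": ["C0"], "k3_x3": "foo"},
    {"k0_x0": "A1", "k1_x1": "B1", "k2_x2": ["C1"], "k3_x3": "bar"},
]
print(remove_keys(input_list, ["k0_x0", "k1_x1"]))
# → [{'k2_x2': ['C0'], 'k3_x3': 'foo'}, {'k2_x2': ['C1'], 'k3_x3': 'bar'}]
```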

View File

@@ -13,6 +13,34 @@ Use the filter :ansplugin:`community.general.replace_keys#filter` if you have a
Let us use the below list in the following examples:
.. ansible-output-meta::
actions:
- name: reset-previous-blocks
- name: set-template
template:
env:
ANSIBLE_CALLBACK_RESULT_FORMAT: yaml
variables:
data:
previous_code_block: yaml
previous_code_block_index: 0
computation:
previous_code_block: yaml+jinja
postprocessors:
- name: reformat-yaml
language: yaml
skip_first_lines: 2
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
- vars:
@{{ data | indent(8) }}@
@{{ computation | indent(8) }}@
ansible.builtin.debug:
var: result
.. code-block:: yaml
input:
@@ -40,17 +68,23 @@ Let us use the below list in the following examples:
gives
.. ansible-output-data::
playbook: ~
.. code-block:: yaml
:emphasize-lines: 1-
result:
- a0: A0
a1: B0
k2_x2: [C0]
k2_x2:
- C0
k3_x3: foo
- a0: A1
a1: B1
k2_x2: [C1]
k2_x2:
- C1
k3_x3: bar
@@ -58,17 +92,37 @@ gives
* The results of the below examples 1-3 are all the same:
.. ansible-output-data::
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
- vars:
@{{ data | indent(8) }}@
# I picked one of the examples
mp: starts_with
target:
- {after: a0, before: k0}
- {after: a1, before: k1}
result: "{{ input | community.general.replace_keys(target=target, matching_parameter=mp) }}"
ansible.builtin.debug:
var: result
.. code-block:: yaml
:emphasize-lines: 1-
result:
- a0: A0
a1: B0
k2_x2: [C0]
k2_x2:
- C0
k3_x3: foo
- a0: A1
a1: B1
k2_x2: [C1]
k2_x2:
- C1
k3_x3: bar
@@ -111,12 +165,29 @@ gives
* The results of the below examples 4-5 are the same:
.. ansible-output-data::
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
- vars:
@{{ data | indent(8) }}@
# I picked one of the examples
mp: regex
target:
- {after: X, before: ^.*_x.*$}
result: "{{ input | community.general.replace_keys(target=target, matching_parameter=mp) }}"
ansible.builtin.debug:
var: result
.. code-block:: yaml
:emphasize-lines: 1-
result:
- {X: foo}
- {X: bar}
- X: foo
- X: bar
4. If more keys match the same ``before`` attribute, the last one will be used.
@@ -145,6 +216,11 @@ gives
6. If there are multiple matches for a key, the first one will be used.
.. ansible-output-meta::
actions:
- name: reset-previous-blocks
.. code-block:: yaml
:emphasize-lines: 1-
@@ -165,11 +241,17 @@ gives
gives
.. ansible-output-data::
playbook: ~
.. code-block:: yaml
:emphasize-lines: 1-
result:
- {X: A, bbb1: B, ccc1: C}
- {X: D, bbb2: E, ccc2: F}
- X: A
bbb1: B
ccc1: C
- X: D
bbb2: E
ccc2: F
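The rename rules above, including "the first matching rule wins for a key" and "the last matching key wins for a target name", can be sketched in plain Python (a simplified model, not the collection's implementation):

```python
import re

# Illustrative sketch of the replace_keys filter semantics.
def replace_keys(items, target, matching_parameter="equal"):
    """Rename keys according to a list of {'before': ..., 'after': ...} rules."""
    matchers = {
        "equal": lambda key, t: key == t,
        "starts_with": lambda key, t: key.startswith(t),
        "ends_with": lambda key, t: key.endswith(t),
        "regex": lambda key, t: re.match(t, key) is not None,
    }
    match = matchers[matching_parameter]
    result = []
    for item in items:
        new = {}
        for key, value in item.items():
            # The first matching rule wins for a given key; if several keys map
            # to the same 'after' name, the last key's value wins.
            for rule in target:
                if match(key, rule["before"]):
                    new[rule["after"]] = value
                    break
            else:
                new[key] = value
        result.append(new)
    return result

input_list = [
    {"k0_x0": "A0", "k1_x1": "B0", "k2_x2": ["C0"], "k3_x3": "foo"},
    {"k0_x0": "A1", "k1_x1": "B1", "k2_x2": ["C1"], "k3_x3": "bar"},
]
rules = [{"after": "a0", "before": "k0"}, {"after": "a1", "before": "k1"}]
print(replace_keys(input_list, rules, "starts_with"))
```

With the regex rule `{'after': 'X', 'before': '^.*_x.*$'}` all four keys collapse onto `X`, so only the last value survives, matching the `[{X: foo}, {X: bar}]` result shown above.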

View File

@@ -20,6 +20,17 @@ The :ansplugin:`community.general.counter filter plugin <community.general.count
This produces:
.. ansible-output-data::
variables:
task:
previous_code_block: yaml+jinja
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
@{{ task | indent(4) }}@
.. code-block:: ansible-output
TASK [Count character occurrences in a string] ********************************************
@@ -72,9 +83,20 @@ This plugin is useful for selecting resources based on current allocation:
This produces:
.. ansible-output-data::
variables:
task:
previous_code_block: yaml+jinja
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
@{{ task | indent(4) }}@
.. code-block:: ansible-output
TASK [Get ID of SCSI controller(s) with less than 4 disks attached and choose the one with the least disks]
TASK [Get ID of SCSI controller(s) with less than 4 disks attached and choose the one with the least disks] ***
ok: [localhost] => {
"msg": "scsi_2"
}
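For intuition, the counting shown in both tasks behaves like Python's `collections.Counter` over a sequence. A rough stdlib equivalent (an illustration, not the plugin's code):

```python
from collections import Counter

def counter(sequence):
    """Count occurrences of each element in a sequence,
    mirroring the mapping the community.general.counter filter returns."""
    return dict(Counter(sequence))

print(counter("abca"))       # character counts for a string
print(counter([2, 2, 3]))    # works for any sequence of hashable items
```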

View File

@@ -31,16 +31,27 @@ You can use the :ansplugin:`community.general.dict_kv filter <community.general.
This produces:
.. ansible-output-data::
variables:
task:
previous_code_block: yaml+jinja
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
@{{ task | indent(4) }}@
.. code-block:: ansible-output
TASK [Create a single-entry dictionary] **************************************************
TASK [Create a single-entry dictionary] ***************************************************
ok: [localhost] => {
"msg": {
"thatsmyvar": "myvalue"
}
}
TASK [Create a list of dictionaries where the 'server' field is taken from a list] *******
TASK [Create a list of dictionaries where the 'server' field is taken from a list] ********
ok: [localhost] => {
"msg": [
{
@@ -87,9 +98,20 @@ If you need to convert a list of key-value pairs to a dictionary, you can use th
This produces:
.. ansible-output-data::
variables:
task:
previous_code_block: yaml+jinja
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
@{{ task | indent(4) }}@
.. code-block:: ansible-output
TASK [Create a dictionary with the dict function] ****************************************
TASK [Create a dictionary with the dict function] *****************************************
ok: [localhost] => {
"msg": {
"1": 2,
@@ -97,7 +119,7 @@ This produces:
}
}
TASK [Create a dictionary with the community.general.dict filter] ************************
TASK [Create a dictionary with the community.general.dict filter] *************************
ok: [localhost] => {
"msg": {
"1": 2,
@@ -105,7 +127,7 @@ This produces:
}
}
TASK [Create a list of dictionaries with map and the community.general.dict filter] ******
TASK [Create a list of dictionaries with map and the community.general.dict filter] *******
ok: [localhost] => {
"msg": [
{
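Both filters shown in this guide are thin wrappers around very small operations; in plain Python they amount to the following (illustrative helpers, not the plugins' code — the sample values echo the tasks above):

```python
def dict_kv(value, key):
    """Single-entry dictionary, as produced by community.general.dict_kv."""
    return {key: value}

def make_dict(pairs):
    """Dictionary from a list of key/value pairs,
    as produced by the community.general.dict filter."""
    return dict(pairs)

print(dict_kv("myvalue", "thatsmyvar"))          # {'thatsmyvar': 'myvalue'}

# Mapping a list through dict_kv yields a list of single-entry dictionaries,
# like the 'server' example above.
servers = ["server1", "server2"]
print([dict_kv(s, "server") for s in servers])

print(make_dict([["1", 2], ["3", 4]]))           # {'1': 2, '3': 4}
```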

View File

@@ -22,6 +22,49 @@ One example is ``ansible_facts.mounts``, which is a list of dictionaries where e
This produces:
.. ansible-output-data::
variables:
task:
previous_code_block: yaml+jinja
skip_first_lines: 3 # the set_fact task
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
- ansible.builtin.set_fact:
ansible_facts:
mounts:
- block_available: 2000
block_size: 4096
block_total: 2345
block_used: 345
device: "/dev/sda1"
fstype: "ext4"
inode_available: 500
inode_total: 512
inode_used: 12
mount: "/boot"
options: "rw,relatime,data=ordered"
size_available: 56821
size_total: 543210
uuid: "ab31cade-d9c1-484d-8482-8a4cbee5241a"
- block_available: 1234
block_size: 4096
block_total: 12345
block_used: 11111
device: "/dev/sda2"
fstype: "ext4"
inode_available: 1111
inode_total: 1234
inode_used: 123
mount: "/"
options: "rw,relatime"
size_available: 42143
size_total: 543210
uuid: "abcdef01-2345-6789-0abc-def012345678"
@{{ task | indent(4) }}@
.. code-block:: ansible-output
TASK [Output mount facts grouped by device name] ******************************************
@@ -79,7 +122,7 @@ This produces:
"options": "rw,relatime",
"size_available": 42143,
"size_total": 543210,
"uuid": "bdf50b7d-4859-40af-8665-c637ee7a7808"
"uuid": "abcdef01-2345-6789-0abc-def012345678"
},
"/boot": {
"block_available": 2000,
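The grouping performed above can be sketched in a few lines of Python. This is a simplified model of `groupby_as_dict` (it assumes the grouping attribute exists and is unique, as the filter requires), using a trimmed-down version of the mount facts:

```python
def groupby_as_dict(items, attribute):
    """Turn a list of dictionaries into a dictionary keyed by one attribute,
    sketching community.general.groupby_as_dict. The attribute must be unique."""
    result = {}
    for item in items:
        key = item[attribute]
        if key in result:
            raise ValueError(f"duplicate value for attribute {attribute!r}: {key!r}")
        result[key] = item
    return result

mounts = [
    {"device": "/dev/sda1", "mount": "/boot", "size_total": 543210},
    {"device": "/dev/sda2", "mount": "/", "size_total": 543210},
]
by_mount = groupby_as_dict(mounts, "mount")
print(by_mount["/"]["device"])  # → /dev/sda2
```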

View File

@@ -21,6 +21,34 @@ These filters preserve the item order, eliminate duplicates and are an extended
Let us use the lists below in the following examples:
.. ansible-output-meta::
actions:
- name: reset-previous-blocks
- name: set-template
template:
env:
ANSIBLE_CALLBACK_RESULT_FORMAT: yaml
variables:
data:
previous_code_block: yaml
previous_code_block_index: 0
computation:
previous_code_block: yaml+jinja
postprocessors:
- name: reformat-yaml
language: yaml
skip_first_lines: 2
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
- vars:
@{{ data | indent(8) }}@
@{{ computation | indent(8) }}@
ansible.builtin.debug:
var: result
.. code-block:: yaml
A: [9, 5, 7, 1, 9, 4, 10, 5, 9, 7]
@@ -35,9 +63,22 @@ The union of ``A`` and ``B`` can be written as:
This statement produces:
.. ansible-output-data::
playbook: ~
.. code-block:: yaml
result: [9, 5, 7, 1, 4, 10, 2, 8, 3]
result:
- 9
- 5
- 7
- 1
- 4
- 10
- 2
- 8
- 3
If you want to calculate the intersection of ``A``, ``B`` and ``C``, you can use the following statement:
@@ -59,9 +100,14 @@ or
All three statements are equivalent and give:
.. ansible-output-data::
playbook: ~
.. code-block:: yaml
result: [1]
result:
- 1
.. note:: Be aware that in most cases, filter calls without any argument require ``flatten=true``; otherwise the input is returned as the result. The reason is that the input is considered a variable argument and is wrapped in an additional outer list. ``flatten=true`` ensures that this list is removed before the input is processed by the filter logic.
@@ -75,7 +121,14 @@ For example, the symmetric difference of ``A``, ``B`` and ``C`` may be written a
This gives:
.. ansible-output-data::
playbook: ~
.. code-block:: yaml
result: [5, 8, 3, 1]
result:
- 5
- 8
- 3
- 1
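The order-preserving, duplicate-eliminating behavior described above can be modeled in plain Python. The lists `A`, `B`, and `C` below are small made-up examples (not the guide's data), and the functions are illustrative sketches that ignore the `flatten` handling of the real filters:

```python
def lists_union(*lists):
    """Order-preserving, duplicate-free union (dict.fromkeys keeps first occurrence)."""
    return list(dict.fromkeys(x for lst in lists for x in lst))

def lists_intersect(*lists):
    """Items of the first list (deduplicated) that appear in all other lists."""
    result = lists_union(lists[0])
    for other in lists[1:]:
        result = [x for x in result if x in other]
    return result

def lists_symmetric_difference(*lists):
    """Left-associative symmetric difference: ((A ^ B) ^ C) ..."""
    result = lists[0]
    for other in lists[1:]:
        union = lists_union(result, other)
        common = lists_intersect(result, other)
        result = [x for x in union if x not in common]
    return result

A = [1, 2, 3, 2]
B = [2, 3, 4]
C = [3, 4, 5]
print(lists_union(A, B, C))  # → [1, 2, 3, 4, 5]
```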

View File

@@ -12,6 +12,34 @@ If you have two or more lists of dictionaries and want to combine them into a li
Let us use the lists below in the following examples:
.. ansible-output-meta::
actions:
- name: reset-previous-blocks
- name: set-template
template:
env:
ANSIBLE_CALLBACK_RESULT_FORMAT: yaml
variables:
data:
previous_code_block: yaml
previous_code_block_index: 0
computation:
previous_code_block: yaml+jinja
postprocessors:
- name: reformat-yaml
language: yaml
skip_first_lines: 2
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
- vars:
@{{ data | indent(8) }}@
@{{ computation | indent(8) }}@
ansible.builtin.debug:
var: list3
.. code-block:: yaml
list1:
@@ -34,13 +62,22 @@ In the example below the lists are merged by the attribute ``name``:
This produces:
.. ansible-output-data::
playbook: ~
.. code-block:: yaml
list3:
- {name: bar, extra: false}
- {name: baz, path: /baz}
- {name: foo, extra: true, path: /foo}
- {name: meh, extra: true}
- extra: false
name: bar
- name: baz
path: /baz
- extra: true
name: foo
path: /foo
- extra: true
name: meh
.. versionadded:: 2.0.0
@@ -56,13 +93,22 @@ It is possible to use a list of lists as an input of the filter:
This produces the same result as in the previous example:
.. ansible-output-data::
playbook: ~
.. code-block:: yaml
list3:
- {name: bar, extra: false}
- {name: baz, path: /baz}
- {name: foo, extra: true, path: /foo}
- {name: meh, extra: true}
- extra: false
name: bar
- name: baz
path: /baz
- extra: true
name: foo
path: /foo
- extra: true
name: meh
Single list
"""""""""""
@@ -75,13 +121,22 @@ It is possible to merge single list:
This produces the same result as in the previous example:
.. ansible-output-data::
playbook: ~
.. code-block:: yaml
list3:
- {name: bar, extra: false}
- {name: baz, path: /baz}
- {name: foo, extra: true, path: /foo}
- {name: meh, extra: true}
- extra: false
name: bar
- name: baz
path: /baz
- extra: true
name: foo
path: /foo
- extra: true
name: meh
The filter also accepts two optional parameters: :ansopt:`community.general.lists_mergeby#filter:recursive` and :ansopt:`community.general.lists_mergeby#filter:list_merge`. This is available since community.general 4.4.0.
@@ -96,6 +151,11 @@ The examples below set :ansopt:`community.general.lists_mergeby#filter:recursive
Let us use the lists below in the following examples
.. ansible-output-meta::
actions:
- name: reset-previous-blocks
.. code-block:: yaml
list1:
@@ -128,17 +188,25 @@ Example :ansopt:`community.general.lists_mergeby#filter:list_merge=replace` (def
This produces:
.. ansible-output-data::
playbook: ~
.. code-block:: yaml
list3:
- name: myname01
param01:
list:
- patch_value
x: default_value
y: patch_value
list: [patch_value]
z: patch_value
- name: myname02
param01: [3, 4, 4]
param01:
- 3
- 4
- 4
list_merge=keep
"""""""""""""""
@@ -153,17 +221,26 @@ Example :ansopt:`community.general.lists_mergeby#filter:list_merge=keep`:
This produces:
.. ansible-output-data::
playbook: ~
.. code-block:: yaml
list3:
- name: myname01
param01:
list:
- default_value
x: default_value
y: patch_value
list: [default_value]
z: patch_value
- name: myname02
param01: [1, 1, 2, 3]
param01:
- 1
- 1
- 2
- 3
list_merge=append
"""""""""""""""""
@@ -178,17 +255,30 @@ Example :ansopt:`community.general.lists_mergeby#filter:list_merge=append`:
This produces:
.. ansible-output-data::
playbook: ~
.. code-block:: yaml
list3:
- name: myname01
param01:
list:
- default_value
- patch_value
x: default_value
y: patch_value
list: [default_value, patch_value]
z: patch_value
- name: myname02
param01: [1, 1, 2, 3, 3, 4, 4]
param01:
- 1
- 1
- 2
- 3
- 3
- 4
- 4
list_merge=prepend
""""""""""""""""""
@@ -203,17 +293,30 @@ Example :ansopt:`community.general.lists_mergeby#filter:list_merge=prepend`:
This produces:
.. ansible-output-data::
playbook: ~
.. code-block:: yaml
list3:
- name: myname01
param01:
list:
- patch_value
- default_value
x: default_value
y: patch_value
list: [patch_value, default_value]
z: patch_value
- name: myname02
param01: [3, 4, 4, 1, 1, 2, 3]
param01:
- 3
- 4
- 4
- 1
- 1
- 2
- 3
list_merge=append_rp
""""""""""""""""""""
@@ -228,17 +331,29 @@ Example :ansopt:`community.general.lists_mergeby#filter:list_merge=append_rp`:
This produces:
.. ansible-output-data::
playbook: ~
.. code-block:: yaml
list3:
- name: myname01
param01:
list:
- default_value
- patch_value
x: default_value
y: patch_value
list: [default_value, patch_value]
z: patch_value
- name: myname02
param01: [1, 1, 2, 3, 4, 4]
param01:
- 1
- 1
- 2
- 3
- 4
- 4
list_merge=prepend_rp
"""""""""""""""""""""
@@ -253,15 +368,26 @@ Example :ansopt:`community.general.lists_mergeby#filter:list_merge=prepend_rp`:
This produces:
.. ansible-output-data::
playbook: ~
.. code-block:: yaml
list3:
- name: myname01
param01:
list:
- patch_value
- default_value
x: default_value
y: patch_value
list: [patch_value, default_value]
z: patch_value
- name: myname02
param01: [3, 4, 4, 1, 1, 2]
param01:
- 3
- 4
- 4
- 1
- 1
- 2

@@ -24,6 +24,17 @@ Ansible offers the :ansplugin:`community.general.read_csv module <community.gene
This produces:
.. ansible-output-data::
variables:
task:
previous_code_block: yaml+jinja
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
@{{ task | indent(4) }}@
.. code-block:: ansible-output
TASK [Parse CSV from string] **************************************************************
@@ -69,6 +80,34 @@ Converting to JSON
This produces:
.. ansible-output-data::
skip_first_lines: 3
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
- ansible.builtin.set_fact:
result_stdout: |-
bin
boot
dev
etc
home
lib
proc
root
run
tmp
- name: Run 'ls' to list files in /
command: ls /
register: result
- name: Parse the ls output
debug:
msg: "{{ result_stdout | community.general.jc('ls') }}"
.. code-block:: ansible-output
TASK [Run 'ls' to list files in /] ********************************************************

@@ -25,6 +25,17 @@ Hashids
This produces:
.. ansible-output-data::
variables:
task:
previous_code_block: yaml+jinja
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
@{{ task | indent(4) }}@
.. code-block:: ansible-output
TASK [Create hashid] **********************************************************************
@@ -66,16 +77,32 @@ You can use the :ansplugin:`community.general.random_mac filter <community.gener
This produces:
.. ansible-output-data::
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
- name: "Create a random MAC starting with ff:"
debug:
# We're using a seed here to avoid randomness in the output
msg: "{{ 'FF' | community.general.random_mac(seed='') }}"
- name: "Create a random MAC starting with 00:11:22:"
debug:
# We're using a seed here to avoid randomness in the output
msg: "{{ '00:11:22' | community.general.random_mac(seed='') }}"
.. code-block:: ansible-output
TASK [Create a random MAC starting with ff:] **********************************************
ok: [localhost] => {
"msg": "ff:69:d3:78:7f:b4"
"msg": "ff:84:f5:d1:59:20"
}
TASK [Create a random MAC starting with 00:11:22:] ****************************************
ok: [localhost] => {
"msg": "00:11:22:71:5d:3b"
"msg": "00:11:22:84:f5:d1"
}
You can also initialize the random number generator from a seed to create random-but-idempotent MAC addresses:
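For instance (an illustrative task; using the inventory hostname as the seed is an assumption of this sketch, any stable string works):

```yaml
- name: Stable random MAC per host
  debug:
    # Same seed => same MAC on every run, so the result is idempotent
    msg: "{{ '52:54:00' | community.general.random_mac(seed=inventory_hostname) }}"
```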

@@ -69,21 +69,32 @@ Note that months and years are using a simplified representation: a month is 30
This produces:
.. ansible-output-data::
variables:
task:
previous_code_block: yaml+jinja
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
@{{ task | indent(4) }}@
.. code-block:: ansible-output
TASK [Convert string to seconds] **********************************************************
ok: [localhost] => {
"msg": "109210.123"
"msg": 109210.123
}
TASK [Convert string to hours] ************************************************************
ok: [localhost] => {
"msg": "30.336145277778"
"msg": 30.336145277778
}
TASK [Convert string to years (using 365.25 days == 1 year)] ******************************
ok: [localhost] => {
"msg": "1.096851471595"
"msg": 1.096851471595
}
.. versionadded:: 0.2.0
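The simplified unit model can be sketched in plain Python (a hypothetical helper, not the collection's implementation; the unit table below is an assumption for illustration and the filter's real defaults may differ):

```python
# Illustrative unit table: a month is taken as 30 days, per the
# simplified representation described above; a year here is 365 days
# (assumption of this sketch).
UNIT_SECONDS = {
    "ms": 0.001,
    "s": 1,
    "m": 60,
    "h": 3600,
    "d": 86400,
    "w": 7 * 86400,
    "mo": 30 * 86400,
    "y": 365 * 86400,
}


def to_seconds(expr: str) -> float:
    """Convert a string such as '1d 6h' to seconds."""
    total = 0.0
    for part in expr.split():
        digits = "".join(c for c in part if c.isdigit() or c == ".")
        total += float(digits) * UNIT_SECONDS[part[len(digits):]]
    return total
```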

@@ -21,9 +21,20 @@ You can use the :ansplugin:`community.general.unicode_normalize filter <communit
This produces:
.. ansible-output-data::
variables:
task:
previous_code_block: yaml+jinja
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
@{{ task | indent(4) }}@
.. code-block:: ansible-output
TASK [Compare Unicode representations] ********************************************************
TASK [Compare Unicode representations] ****************************************************
ok: [localhost] => {
"msg": true
}
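The comparison above hinges on standard Unicode normalization, which Python's stdlib can illustrate directly (an illustration of the concept, not the plugin's code):

```python
import unicodedata

# U+00C7 (precomposed "C with cedilla") versus "C" followed by U+0327
# (combining cedilla): visually identical, different code point sequences.
precomposed = "\u00c7"
decomposed = "C\u0327"

print(precomposed == decomposed)  # False: raw strings differ
print(unicodedata.normalize("NFC", decomposed) == precomposed)  # True
```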

@@ -23,6 +23,17 @@ If you need to sort a list of version numbers, the Jinja ``sort`` filter is prob
This produces:
.. ansible-output-data::
variables:
task:
previous_code_block: yaml+jinja
playbook: |-
- hosts: localhost
gather_facts: false
tasks:
@{{ task | indent(4) }}@
.. code-block:: ansible-output
TASK [Sort list by version number] ********************************************************
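Plain lexicographic sorting misorders version strings, as a quick Python comparison shows (illustrative only; the key function below assumes purely dotted-integer versions, while the filter handles richer version schemes):

```python
versions = ["1.10.0", "1.2.0", "1.9.1"]

# Lexicographic sort compares character by character, so "1.10.0"
# incorrectly sorts before "1.2.0".
print(sorted(versions))  # ['1.10.0', '1.2.0', '1.9.1']

# A numeric key restores the expected ordering.
print(sorted(versions, key=lambda v: tuple(map(int, v.split(".")))))
# ['1.2.0', '1.9.1', '1.10.0']
```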

@@ -0,0 +1,114 @@
..
Copyright (c) Ansible Project
GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
SPDX-License-Identifier: GPL-3.0-or-later
.. _ansible_collections.community.general.docsite.guide_ee:
Execution Environment Guide
===========================
`Ansible Execution Environments <https://docs.ansible.com/projects/ansible/latest/getting_started_ee/index.html>`_
(EEs) are container images that bundle ansible-core, collections, and their Python and system dependencies.
They are the standard runtime for Red Hat Ansible Automation Platform and AWX, replacing the older virtualenv model.
They can also be used outside of Ansible Automation Platform and AWX with `ansible-navigator <https://docs.ansible.com/projects/navigator/>`__, or with ansible-runner directly.
What runs in the EE
^^^^^^^^^^^^^^^^^^^
Only **controller-side plugins** run inside the EE. Their Python and system dependencies must be installed there.
This includes: lookup plugins, inventory plugins, callback plugins, connection plugins, become plugins, and filter plugins.
Modules run on the managed nodes and are transferred there at runtime — their dependencies must be present on the
target, not in the EE.
.. note::
Modules delegated to ``localhost`` (for example, those that interact with a remote API) are an exception:
they run on the controller and their dependencies must therefore be available in the EE.
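For example, a task like the following runs on the controller despite targeting a remote host, so its module's Python dependencies belong in the EE (hypothetical module name and endpoint, for illustration only):

```yaml
- name: Talk to a remote API from the controller
  community.general.some_api_module:  # hypothetical module
    endpoint: https://api.example.com
  delegate_to: localhost  # executes inside the EE, so its Python deps must be there
```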
Why community.general does not provide EE metadata
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
``community.general`` ships dozens of controller-side plugins covering a very broad range of technologies.
Bundling the dependencies for all of them into a single EE image would almost certainly create irreconcilable
conflicts — both within the collection and with other collections or tools (such as ``ansible-lint``) that
share the same image.
For that reason, ``community.general`` does **not** provide Python or system package dependency metadata.
Users are instead expected to create purpose-built, minimal EEs containing only the dependencies
required by the specific plugins they actually use.
Finding the dependencies you need
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Every plugin that has external dependencies documents them in its ``requirements`` field.
You can inspect those with ``ansible-doc``:
.. code-block:: shell
$ ansible-doc -t lookup community.general.some_lookup | grep -A 10 "REQUIREMENTS"
Or browse the plugin's documentation page on `docs.ansible.com <https://docs.ansible.com/ansible/latest/collections/community/general/>`_.
For example, a lookup plugin that wraps an external service might list:
.. code-block:: yaml
requirements:
- some-python-library >= 1.2
An inventory plugin backed by a REST API might list:
.. code-block:: yaml
requirements:
- requests
- some-sdk
These are the packages you need to add to your EE.
Building a minimal EE with ansible-builder
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
`ansible-builder <https://docs.ansible.com/projects/builder/en/latest/>`_ is the standard tool for creating EEs.
Install it with:
.. code-block:: shell
$ pip install ansible-builder
Create an ``execution-environment.yml`` **in your own project** (not inside the ``community.general`` source tree) that
includes only the dependencies needed for the plugins you use:
.. code-block:: yaml
version: 3
dependencies:
galaxy:
collections:
- name: community.general
python:
- some-python-library>=1.2
- requests
system:
- libxml2-devel [platform:rpm]
images:
base_image:
name: ghcr.io/ansible/community-ee-base:latest
Then build the image:
.. code-block:: shell
$ ansible-builder build -t my-custom-ee:latest
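You can then run playbooks inside the image, for example with ansible-navigator (the image tag matches the build command above; the playbook name is a placeholder):

```shell
$ ansible-navigator run playbook.yml --execution-environment-image my-custom-ee:latest
```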
.. seealso::
- `ansible-builder documentation <https://docs.ansible.com/projects/builder/en/latest/>`_
- `Building EEs with ansible-builder <https://ansible-builder.readthedocs.io/en/latest/definition/>`_
- `Issue #2968 — original request for EE requirements support <https://github.com/ansible-collections/community.general/issues/2968>`_
- `Issue #4512 — design discussion for EE support in community.general <https://github.com/ansible-collections/community.general/issues/4512>`_

@@ -12,7 +12,7 @@ The inventory plugin :ansplugin:`community.general.iocage#inventory` gets the in
See:
* `iocage - A FreeBSD Jail Manager <https://iocage.readthedocs.io/en/latest>`_
* `iocage - A FreeBSD Jail Manager <https://freebsd.github.io/iocage/>`_
* `man iocage <https://man.freebsd.org/cgi/man.cgi?query=iocage>`_
* `Jails and Containers <https://docs.freebsd.org/en/books/handbook/jails>`_

@@ -20,7 +20,7 @@ As root at the iocage host, create three VNET jails with a DHCP interface from t
shell> iocage create --template ansible_client --name srv_3 bpf=1 dhcp=1 vnet=1
srv_3 successfully created!
See: `Configuring a VNET Jail <https://iocage.readthedocs.io/en/latest/networking.html#configuring-a-vnet-jail>`_.
See: `Configuring VNET <https://freebsd.github.io/iocage/networking.html#vimage-vnet>`_.
As admin at the controller, list the jails:
@@ -115,7 +115,7 @@ Optionally, create shared IP jails:
| None | srv_3 | off | down | jail | 14.2-RELEASE-p3 | em0|10.1.0.103/24 | - | ansible_client | no |
+------+-------+------+-------+------+-----------------+-------------------+-----+----------------+----------+
See: `Configuring a Shared IP Jail <https://iocage.readthedocs.io/en/latest/networking.html#configuring-a-shared-ip-jail>`_
See: `Configuring a Shared IP Jail <https://freebsd.github.io/iocage/networking.html#shared-ip>`_
If iocage needs environment variable(s), use the option :ansopt:`community.general.iocage#inventory:env`. For example,

@@ -5,7 +5,7 @@
namespace: community
name: general
version: 11.4.4
version: 12.6.1
readme: README.md
authors:
- Ansible (https://github.com/ansible)
@@ -19,3 +19,5 @@ repository: https://github.com/ansible-collections/community.general
documentation: https://docs.ansible.com/projects/ansible/latest/collections/community/general/
homepage: https://github.com/ansible-collections/community.general
issues: https://github.com/ansible-collections/community.general/issues
build_ignore:
- .nox

@@ -3,7 +3,7 @@
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
requires_ansible: '>=2.16.0'
requires_ansible: '>=2.17.0'
action_groups:
consul:
- consul_agent_check
@@ -21,6 +21,7 @@ action_groups:
keycloak:
- keycloak_authentication
- keycloak_authentication_required_actions
- keycloak_authentication_v2
- keycloak_authz_authorization_scope
- keycloak_authz_custom_policy
- keycloak_authz_permission
@@ -40,12 +41,14 @@ action_groups:
- keycloak_realm
- keycloak_realm_key
- keycloak_realm_keys_metadata_info
- keycloak_realm_localization
- keycloak_realm_rolemapping
- keycloak_role
- keycloak_user
- keycloak_user_federation
- keycloak_user_rolemapping
- keycloak_userprofile
- keycloak_user_execute_actions_email
scaleway:
- scaleway_compute
- scaleway_compute_private_network
@@ -100,7 +103,7 @@ plugin_routing:
warning_text: Use the 'default' callback plugin with 'display_failed_stderr
= yes' option.
yaml:
deprecation:
tombstone:
removal_version: 12.0.0
warning_text: >-
The plugin has been superseded by the option `result_format=yaml` in callback plugin ansible.builtin.default from ansible-core 2.13 onwards.
@@ -153,7 +156,7 @@ plugin_routing:
removal_version: 13.0.0
warning_text: Project Atomic was sunset by the end of 2019.
bearychat:
deprecation:
tombstone:
removal_version: 12.0.0
warning_text: Chat service is no longer available.
catapult:
@@ -202,6 +205,14 @@ plugin_routing:
tombstone:
removal_version: 10.0.0
warning_text: Use community.general.consul_token and/or community.general.consul_policy instead.
dimensiondata_network:
deprecation:
removal_version: 13.0.0
warning_text: Service and its endpoints are no longer available.
dimensiondata_vlan:
deprecation:
removal_version: 13.0.0
warning_text: Service and its endpoints are no longer available.
docker_compose:
redirect: community.docker.docker_compose
docker_config:
@@ -257,7 +268,7 @@ plugin_routing:
docker_volume_info:
redirect: community.docker.docker_volume_info
facter:
deprecation:
tombstone:
removal_version: 12.0.0
warning_text: Use community.general.facter_facts instead.
flowdock:
@@ -361,6 +372,26 @@ plugin_routing:
tombstone:
removal_version: 3.0.0
warning_text: Use community.general.hpilo_info instead.
aix_devices:
deprecation:
removal_version: 15.0.0
warning_text: Use ibm.power_aix.devices instead. The C(ibm.power_aix) collection is actively maintained by IBM.
aix_filesystem:
deprecation:
removal_version: 15.0.0
warning_text: Use ibm.power_aix.filesystem instead. The C(ibm.power_aix) collection is actively maintained by IBM.
aix_inittab:
deprecation:
removal_version: 15.0.0
warning_text: Use ibm.power_aix.inittab instead. The C(ibm.power_aix) collection is actively maintained by IBM.
aix_lvg:
deprecation:
removal_version: 15.0.0
warning_text: Use ibm.power_aix.lvg instead. The C(ibm.power_aix) collection is actively maintained by IBM.
aix_lvol:
deprecation:
removal_version: 15.0.0
warning_text: Use ibm.power_aix.lvol instead. The C(ibm.power_aix) collection is actively maintained by IBM.
idrac_firmware:
redirect: dellemc.openmanage.idrac_firmware
idrac_redfish_facts:
@@ -369,6 +400,10 @@ plugin_routing:
warning_text: Use community.general.idrac_redfish_info instead.
idrac_server_config_profile:
redirect: dellemc.openmanage.idrac_server_config_profile
jboss:
deprecation:
removal_version: 14.0.0
warning_text: Use role middleware_automation.wildfly.wildfly_app_deploy instead.
jenkins_job_facts:
tombstone:
removal_version: 3.0.0
@@ -389,6 +424,10 @@ plugin_routing:
redirect: community.kubevirt.kubevirt_template
kubevirt_vm:
redirect: community.kubevirt.kubevirt_vm
layman:
deprecation:
removal_version: 14.0.0
warning_text: Gentoo deprecated C(layman) in mid-2023.
ldap_attr:
tombstone:
removal_version: 3.0.0
@@ -493,6 +532,30 @@ plugin_routing:
tombstone:
removal_version: 3.0.0
warning_text: Use community.general.one_image_info instead.
oneandone_firewall_policy:
deprecation:
removal_version: 13.0.0
warning_text: DNS fails to resolve the API endpoint used by the module.
oneandone_load_balancer:
deprecation:
removal_version: 13.0.0
warning_text: DNS fails to resolve the API endpoint used by the module.
oneandone_monitoring_policy:
deprecation:
removal_version: 13.0.0
warning_text: DNS fails to resolve the API endpoint used by the module.
oneandone_private_network:
deprecation:
removal_version: 13.0.0
warning_text: DNS fails to resolve the API endpoint used by the module.
oneandone_public_ip:
deprecation:
removal_version: 13.0.0
warning_text: DNS fails to resolve the API endpoint used by the module.
oneandone_server:
deprecation:
removal_version: 13.0.0
warning_text: DNS fails to resolve the API endpoint used by the module.
onepassword_facts:
tombstone:
removal_version: 3.0.0
@@ -800,6 +863,10 @@ plugin_routing:
tombstone:
removal_version: 3.0.0
warning_text: Use purestorage.flashblade.purefb_info instead.
pushbullet:
deprecation:
removal_version: 13.0.0
warning_text: Module relies on Python package pushbullet.py which is not maintained and supports only up to Python 3.2.
python_requirements_facts:
tombstone:
removal_version: 3.0.0
@@ -996,12 +1063,24 @@ plugin_routing:
tombstone:
removal_version: 3.0.0
warning_text: Use community.general.smartos_image_info instead.
spotinst_aws_elastigroup:
deprecation:
removal_version: 13.0.0
warning_text: Module relies on unsupported Python package. Use the module spot.cloud_modules.aws_elastigroup instead.
stackdriver:
tombstone:
removal_version: 9.0.0
warning_text: This module relied on HTTPS APIs that do not exist anymore,
and any new development in the direction of providing an alternative should
happen in the context of the google.cloud collection.
swupd:
deprecation:
removal_version: 15.0.0
warning_text: Clear Linux reached EOL in July 2025. If you think the module is still useful for another distribution, please create an issue in the community.general repository.
typetalk:
deprecation:
removal_version: 13.0.0
warning_text: The typetalk service will be discontinued in December 2025.
vertica_facts:
tombstone:
removal_version: 3.0.0
@@ -1038,6 +1117,14 @@ plugin_routing:
doc_fragments:
_gcp:
redirect: community.google._gcp
dimensiondata:
deprecation:
removal_version: 13.0.0
warning_text: Service and its endpoints are no longer available.
dimensiondata_wait:
deprecation:
removal_version: 13.0.0
warning_text: Service and its endpoints are no longer available.
docker:
redirect: community.docker.docker
hetzner:
@@ -1080,7 +1167,7 @@ plugin_routing:
removal_version: 15.0.0
warning_text: The proxmox content has been moved to community.proxmox.
purestorage:
deprecation:
tombstone:
removal_version: 12.0.0
warning_text: The modules for purestorage were removed in community.general 3.0.0, this document fragment was left behind.
rackspace:
@@ -1089,6 +1176,18 @@ plugin_routing:
warning_text: This doc fragment was used by rax modules, that relied on the deprecated
package pyrax.
module_utils:
cloud:
deprecation:
removal_version: 13.0.0
warning_text: This code is not used by community.general. If you want to use it in another collection, please copy it over.
database:
deprecation:
removal_version: 13.0.0
warning_text: This code is not used by community.general. If you want to use it in another collection, please copy it over.
dimensiondata:
deprecation:
removal_version: 13.0.0
warning_text: Service and its endpoints are no longer available.
docker.common:
redirect: community.docker.common
docker.swarm:
@@ -1101,6 +1200,10 @@ plugin_routing:
redirect: community.google.gcp
hetzner:
redirect: community.hrobot.robot
known_hosts:
deprecation:
removal_version: 13.0.0
warning_text: This code is not used by community.general. If you want to use it in another collection, please copy it over.
kubevirt:
redirect: community.kubevirt.kubevirt
net_tools.nios.api:
@@ -1109,6 +1212,10 @@ plugin_routing:
deprecation:
removal_version: 13.0.0
warning_text: Code is unmaintained here and official Oracle collection is available for a number of years.
oneandone:
deprecation:
removal_version: 13.0.0
warning_text: DNS fails to resolve the API endpoint used by the module.
postgresql:
redirect: community.postgresql.postgresql
proxmox:
@@ -1117,7 +1224,7 @@ plugin_routing:
removal_version: 15.0.0
warning_text: The proxmox content has been moved to community.proxmox.
pure:
deprecation:
tombstone:
removal_version: 12.0.0
warning_text: The modules for purestorage were removed in community.general 3.0.0, this module util was left behind.
rax:
@@ -1128,6 +1235,10 @@ plugin_routing:
redirect: dellemc.openmanage.dellemc_idrac
remote_management.dellemc.ome:
redirect: dellemc.openmanage.ome
saslprep:
deprecation:
removal_version: 13.0.0
warning_text: This code is not used by community.general. If you want to use it in another collection, please copy it over.
inventory:
docker_machine:
redirect: community.docker.docker_machine

@@ -6,13 +6,17 @@
# dependencies = ["nox>=2025.02.09", "antsibull-nox"]
# ///
import os
import sys
import nox
import nox # type: ignore[import-not-found]
# Whether the noxfile is running in CI:
IN_CI = os.environ.get("CI") == "true"
try:
import antsibull_nox
import antsibull_nox # type: ignore[import-not-found]
except ImportError:
print("You need to install antsibull-nox in the same Python environment as nox.")
sys.exit(1)
@@ -32,6 +36,23 @@ def botmeta(session: nox.Session) -> None:
session.run("python", "tests/sanity/extra/botmeta.py")
@nox.session(name="ansible-output", default=False)
def ansible_output(session: nox.Session) -> None:
session.install(
"ansible-core",
"antsibull-docs",
# Needed libs for some code blocks:
"jc",
"hashids",
# Tools for post-processing
"ruamel.yaml", # used by docs/docsite/reformat-yaml.py
)
args = []
if IN_CI:
args.append("--check")
session.run("antsibull-docs", "ansible-output", *args, *session.posargs)
# Allow to run the noxfile with `python noxfile.py`, `pipx run noxfile.py`, or similar.
# Requires nox >= 2025.02.09
if __name__ == "__main__":

@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2020, quidame <quidame@poivron.org>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -6,85 +5,96 @@
from __future__ import annotations
import time
import typing as t
from ansible.plugins.action import ActionBase
from ansible.errors import AnsibleActionFail, AnsibleConnectionFailure
from ansible.utils.vars import merge_hash
from ansible.plugins.action import ActionBase
from ansible.utils.display import Display
from ansible.utils.vars import merge_hash
display = Display()
class ActionModule(ActionBase):
# Keep internal params away from user interactions
_VALID_ARGS = frozenset(('path', 'state', 'table', 'noflush', 'counters', 'modprobe', 'ip_version', 'wait'))
_VALID_ARGS = frozenset(("path", "state", "table", "noflush", "counters", "modprobe", "ip_version", "wait"))
DEFAULT_SUDOABLE = True
@staticmethod
def msg_error__async_and_poll_not_zero(task_poll, task_async, max_timeout):
def msg_error__async_and_poll_not_zero(task_poll, task_async, max_timeout) -> str:
return (
"This module doesn't support async>0 and poll>0 when its 'state' param "
"is set to 'restored'. To enable its rollback feature (that needs the "
"module to run asynchronously on the remote), please set task attribute "
f"'poll' (={task_poll}) to 0, and 'async' (={task_async}) to a value >2 and not greater than "
f"'ansible_timeout' (={max_timeout}) (recommended).")
f"'ansible_timeout' (={max_timeout}) (recommended)."
)
@staticmethod
def msg_warning__no_async_is_no_rollback(task_poll, task_async, max_timeout):
def msg_warning__no_async_is_no_rollback(task_poll, task_async, max_timeout) -> str:
return (
"Attempts to restore iptables state without rollback in case of mistake "
"may lead the ansible controller to loose access to the hosts and never "
"regain it before fixing firewall rules through a serial console, or any "
f"other way except SSH. Please set task attribute 'poll' (={task_poll}) to 0, and "
f"'async' (={task_async}) to a value >2 and not greater than 'ansible_timeout' (={max_timeout}) "
"(recommended).")
"(recommended)."
)
@staticmethod
def msg_warning__async_greater_than_timeout(task_poll, task_async, max_timeout):
def msg_warning__async_greater_than_timeout(task_poll, task_async, max_timeout) -> str:
return (
"You attempt to restore iptables state with rollback in case of mistake, "
"but with settings that will lead this rollback to happen AFTER that the "
"controller will reach its own timeout. Please set task attribute 'poll' "
f"(={task_poll}) to 0, and 'async' (={task_async}) to a value >2 and not greater than "
f"'ansible_timeout' (={max_timeout}) (recommended).")
f"'ansible_timeout' (={max_timeout}) (recommended)."
)
def _async_result(self, async_status_args, task_vars, timeout):
'''
def _async_result(
self, async_status_args: dict[str, t.Any], task_vars: dict[str, t.Any], timeout: int
) -> dict[str, t.Any]:
"""
Retrieve results of the asynchronous task, and display them in place of
the async wrapper results (those with the ansible_job_id key).
'''
"""
async_status = self._task.copy()
async_status.args = async_status_args
async_status.action = 'ansible.builtin.async_status'
async_status.action = "ansible.builtin.async_status"
async_status.async_val = 0
async_action = self._shared_loader_obj.action_loader.get(
async_status.action, task=async_status, connection=self._connection,
play_context=self._play_context, loader=self._loader, templar=self._templar,
shared_loader_obj=self._shared_loader_obj)
async_status.action,
task=async_status,
connection=self._connection,
play_context=self._play_context,
loader=self._loader,
templar=self._templar,
shared_loader_obj=self._shared_loader_obj,
)
if async_status.args['mode'] == 'cleanup':
if async_status.args["mode"] == "cleanup":
return async_action.run(task_vars=task_vars)
# At least one iteration is required, even if timeout is 0.
for dummy in range(max(1, timeout)):
async_result = async_action.run(task_vars=task_vars)
if async_result.get('finished', 0) == 1:
if async_result.get("finished", 0) == 1:
break
time.sleep(min(1, timeout))
return async_result
def run(self, tmp=None, task_vars=None):
def run(self, tmp: str | None = None, task_vars: dict[str, t.Any] | None = None) -> dict[str, t.Any]:
self._supports_check_mode = True
self._supports_async = True
result = super(ActionModule, self).run(tmp, task_vars)
if task_vars is None:
task_vars = {}
result = super().run(tmp, task_vars)
del tmp # tmp no longer has any effect
if not result.get('skipped'):
if not result.get("skipped"):
# FUTURE: better to let _execute_module calculate this internally?
wrap_async = self._task.async_val and not self._connection.has_native_async
@@ -99,41 +109,38 @@ class ActionModule(ActionBase):
starter_cmd = None
confirm_cmd = None
if module_args.get('state', None) == 'restored':
if module_args.get("state", None) == "restored":
if not wrap_async:
if not check_mode:
display.warning(self.msg_error__async_and_poll_not_zero(
task_poll,
task_async,
max_timeout))
display.warning(self.msg_error__async_and_poll_not_zero(task_poll, task_async, max_timeout))
elif task_poll:
raise AnsibleActionFail(self.msg_warning__no_async_is_no_rollback(
task_poll,
task_async,
max_timeout))
raise AnsibleActionFail(
self.msg_warning__no_async_is_no_rollback(task_poll, task_async, max_timeout)
)
else:
if task_async > max_timeout and not check_mode:
display.warning(self.msg_warning__async_greater_than_timeout(
task_poll,
task_async,
max_timeout))
display.warning(
self.msg_warning__async_greater_than_timeout(task_poll, task_async, max_timeout)
)
# inject the async directory based on the shell option into the
# module args
async_dir = self.get_shell_option('async_dir', default="~/.ansible_async")
async_dir = self.get_shell_option("async_dir", default="~/.ansible_async")
# Bind the loop max duration to consistent values on both
# remote and local sides (if not the same, make the loop
# longer on the controller); and set a backup file path.
module_args['_timeout'] = task_async
module_args['_back'] = f'{async_dir}/iptables.state'
async_status_args = dict(mode='status')
module_args["_timeout"] = task_async
module_args["_back"] = f"{async_dir}/iptables.state"
async_status_args = dict(mode="status")
confirm_cmd = f"rm -f {module_args['_back']}"
starter_cmd = f"touch {module_args['_back']}.starter"
remaining_time = max(task_async, max_timeout)
# do work!
result = merge_hash(result, self._execute_module(module_args=module_args, task_vars=task_vars, wrap_async=wrap_async))
result = merge_hash(
result, self._execute_module(module_args=module_args, task_vars=task_vars, wrap_async=wrap_async)
)
# Then the 3-steps "go ahead or rollback":
# 1. Catch early errors of the module (in asynchronous task) if any.
@@ -141,9 +148,9 @@ class ActionModule(ActionBase):
# 2. Reset connection to ensure a persistent one will not be reused.
# 3. Confirm the restored state by removing the backup on the remote.
# Retrieve the results of the asynchronous task to return them.
if '_back' in module_args:
async_status_args['jid'] = result.get('ansible_job_id', None)
if async_status_args['jid'] is None:
if "_back" in module_args:
async_status_args["jid"] = result.get("ansible_job_id", None)
if async_status_args["jid"] is None:
raise AnsibleActionFail("Unable to get 'ansible_job_id'.")
# Catch early errors due to missing mandatory option, bad
@@ -157,7 +164,7 @@ class ActionModule(ActionBase):
# As the main command is not yet executed on the target, here
# 'finished' means 'failed before main command be executed'.
if not result['finished']:
if not result["finished"]:
try:
self._connection.reset()
except AttributeError:
@@ -179,16 +186,16 @@ class ActionModule(ActionBase):
result = merge_hash(result, self._async_result(async_status_args, task_vars, remaining_time))
# Cleanup async related stuff and internal params
for key in ('ansible_job_id', 'results_file', 'started', 'finished'):
for key in ("ansible_job_id", "results_file", "started", "finished"):
if result.get(key):
del result[key]
if result.get('invocation', {}).get('module_args'):
for key in ('_back', '_timeout', '_async_dir', 'jid'):
if result['invocation']['module_args'].get(key):
del result['invocation']['module_args'][key]
if result.get("invocation", {}).get("module_args"):
for key in ("_back", "_timeout", "_async_dir", "jid"):
if result["invocation"]["module_args"].get(key):
del result["invocation"]["module_args"][key]
async_status_args['mode'] = 'cleanup'
async_status_args["mode"] = "cleanup"
dummy = self._async_result(async_status_args, task_vars, 0)
if not wrap_async:

@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2020, Amin Vakil <info@aminvakil.com>
# Copyright (c) 2016-2018, Matt Davis <mdavis@ansible.com>
# Copyright (c) 2018, Sam Doran <sdoran@redhat.com>
@@ -7,121 +6,117 @@
from __future__ import annotations
import typing as t
from ansible.errors import AnsibleError, AnsibleConnectionFailure
from ansible.module_utils.common.text.converters import to_native, to_text
from ansible.errors import AnsibleConnectionFailure, AnsibleError
from ansible.module_utils.common.collections import is_string
from ansible.module_utils.common.text.converters import to_native, to_text
from ansible.plugins.action import ActionBase
from ansible.utils.display import Display
if t.TYPE_CHECKING:
class Distribution(t.TypedDict):
name: str
version: str
family: str
display = Display()
def fmt(mapping, key):
return to_native(mapping[key]).strip()
class TimedOutException(Exception):
pass
class ActionModule(ActionBase):
TRANSFERS_FILES = False
_VALID_ARGS = frozenset((
'msg',
'delay',
'search_paths'
))
_VALID_ARGS = frozenset(("msg", "delay", "search_paths"))
DEFAULT_CONNECT_TIMEOUT = None
DEFAULT_PRE_SHUTDOWN_DELAY = 0
DEFAULT_SHUTDOWN_MESSAGE = 'Shut down initiated by Ansible'
DEFAULT_SHUTDOWN_COMMAND = 'shutdown'
DEFAULT_SHUTDOWN_MESSAGE = "Shut down initiated by Ansible"
DEFAULT_SHUTDOWN_COMMAND = "shutdown"
DEFAULT_SHUTDOWN_COMMAND_ARGS = '-h {delay_min} "{message}"'
DEFAULT_SUDOABLE = True
SHUTDOWN_COMMANDS = {
'alpine': 'poweroff',
'vmkernel': 'halt',
"alpine": "poweroff",
"vmkernel": "halt",
}
SHUTDOWN_COMMAND_ARGS = {
'alpine': '',
'void': '-h +{delay_min} "{message}"',
'freebsd': '-p +{delay_sec}s "{message}"',
'linux': DEFAULT_SHUTDOWN_COMMAND_ARGS,
'macosx': '-h +{delay_min} "{message}"',
'openbsd': '-h +{delay_min} "{message}"',
'solaris': '-y -g {delay_sec} -i 5 "{message}"',
'sunos': '-y -g {delay_sec} -i 5 "{message}"',
'vmkernel': '-d {delay_sec}',
'aix': '-Fh',
"alpine": "",
"void": '-h +{delay_min} "{message}"',
"freebsd": '-p +{delay_sec}s "{message}"',
"linux": DEFAULT_SHUTDOWN_COMMAND_ARGS,
"macosx": '-h +{delay_min} "{message}"',
"openbsd": '-h +{delay_min} "{message}"',
"solaris": '-y -g {delay_sec} -i 5 "{message}"',
"sunos": '-y -g {delay_sec} -i 5 "{message}"',
"vmkernel": "-d {delay_sec}",
"aix": "-Fh",
}
def __init__(self, *args, **kwargs):
super(ActionModule, self).__init__(*args, **kwargs)
super().__init__(*args, **kwargs)
@property
def delay(self):
return self._check_delay('delay', self.DEFAULT_PRE_SHUTDOWN_DELAY)
return self._check_delay("delay", self.DEFAULT_PRE_SHUTDOWN_DELAY)
def _check_delay(self, key, default):
def _check_delay(self, key: str, default: int) -> int:
"""Ensure that the value is positive or zero"""
value = int(self._task.args.get(key, default))
if value < 0:
value = 0
return value
def _get_value_from_facts(self, variable_name, distribution, default_value):
@staticmethod
def _get_value_from_facts(data: dict[str, str], distribution: Distribution, default_value: str) -> str:
"""Get dist+version specific args first, then distribution, then family, lastly use default"""
attr = getattr(self, variable_name)
value = attr.get(
distribution['name'] + distribution['version'],
attr.get(
distribution['name'],
attr.get(
distribution['family'],
getattr(self, default_value))))
return value
return data.get(
distribution["name"] + distribution["version"],
data.get(distribution["name"], data.get(distribution["family"], default_value)),
)
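The layered lookup in `_get_value_from_facts` resolves the most specific key first: distribution name plus version, then name, then OS family, then the default. A standalone sketch using the `SHUTDOWN_COMMANDS` table from above:

```python
def get_value_from_facts(data, distribution, default_value):
    # Most specific key wins: name+version, then name, then OS family, then default.
    return data.get(
        distribution["name"] + distribution["version"],
        data.get(distribution["name"], data.get(distribution["family"], default_value)),
    )

SHUTDOWN_COMMANDS = {"alpine": "poweroff", "vmkernel": "halt"}

# alpine has its own entry; ubuntu falls through to the default.
print(get_value_from_facts(SHUTDOWN_COMMANDS, {"name": "alpine", "version": "3", "family": "alpine"}, "shutdown"))  # poweroff
print(get_value_from_facts(SHUTDOWN_COMMANDS, {"name": "ubuntu", "version": "22", "family": "debian"}, "shutdown"))  # shutdown
```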
def get_distribution(self, task_vars):
def get_distribution(self, task_vars: dict[str, t.Any]) -> Distribution:
# FIXME: only execute the module if we don't already have the facts we need
distribution = {}
display.debug(f'{self._task.action}: running setup module to get distribution')
display.debug(f"{self._task.action}: running setup module to get distribution")
module_output = self._execute_module(
task_vars=task_vars,
module_name='ansible.legacy.setup',
module_args={'gather_subset': 'min'})
task_vars=task_vars, module_name="ansible.legacy.setup", module_args={"gather_subset": "min"}
)
try:
if module_output.get('failed', False):
raise AnsibleError(f"Failed to determine system distribution. {fmt(module_output, 'module_stdout')}, {fmt(module_output, 'module_stderr')}")
distribution['name'] = module_output['ansible_facts']['ansible_distribution'].lower()
distribution['version'] = to_text(
module_output['ansible_facts']['ansible_distribution_version'].split('.')[0])
distribution['family'] = to_text(module_output['ansible_facts']['ansible_os_family'].lower())
if module_output.get("failed", False):
raise AnsibleError(
f"Failed to determine system distribution. {to_native(module_output['module_stdout'])}, {to_native(module_output['module_stderr'])}"
)
distribution: Distribution = {
"name": module_output["ansible_facts"]["ansible_distribution"].lower(),
"version": to_text(module_output["ansible_facts"]["ansible_distribution_version"].split(".")[0]),
"family": to_text(module_output["ansible_facts"]["ansible_os_family"].lower()),
}
display.debug(f"{self._task.action}: distribution: {distribution}")
return distribution
except KeyError as ke:
raise AnsibleError(f'Failed to get distribution information. Missing "{ke.args[0]}" in output.')
raise AnsibleError(f'Failed to get distribution information. Missing "{ke.args[0]}" in output.') from ke
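The distribution dict built by `get_distribution` keeps only the major version and lower-cases the name and family. A sketch of that extraction with hypothetical setup-module output (the `to_text` conversion is omitted to stay dependency-free):

```python
module_output = {"ansible_facts": {
    "ansible_distribution": "Ubuntu",
    "ansible_distribution_version": "22.04",
    "ansible_os_family": "Debian",
}}
distribution = {
    "name": module_output["ansible_facts"]["ansible_distribution"].lower(),
    "version": module_output["ansible_facts"]["ansible_distribution_version"].split(".")[0],
    "family": module_output["ansible_facts"]["ansible_os_family"].lower(),
}
print(distribution)  # {'name': 'ubuntu', 'version': '22', 'family': 'debian'}
```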
def get_shutdown_command(self, task_vars, distribution):
def find_command(command, find_search_paths):
display.debug(f'{self._task.action}: running find module looking in {find_search_paths} to get path for "{command}"')
def get_shutdown_command(self, task_vars: dict[str, t.Any], distribution: Distribution) -> str:
def find_command(command: str, find_search_paths: list[str]) -> list[str]:
display.debug(
f'{self._task.action}: running find module looking in {find_search_paths} to get path for "{command}"'
)
find_result = self._execute_module(
task_vars=task_vars,
# prevent collection search by calling with ansible.legacy (still allows library/ override of find)
module_name='ansible.legacy.find',
module_args={
'paths': find_search_paths,
'patterns': [command],
'file_type': 'any'
}
module_name="ansible.legacy.find",
module_args={"paths": find_search_paths, "patterns": [command], "file_type": "any"},
)
return [x['path'] for x in find_result['files']]
return [x["path"] for x in find_result["files"]]
shutdown_bin = self._get_value_from_facts('SHUTDOWN_COMMANDS', distribution, 'DEFAULT_SHUTDOWN_COMMAND')
default_search_paths = ['/sbin', '/usr/sbin', '/usr/local/sbin']
search_paths = self._task.args.get('search_paths', default_search_paths)
shutdown_bin = self._get_value_from_facts(self.SHUTDOWN_COMMANDS, distribution, self.DEFAULT_SHUTDOWN_COMMAND)
default_search_paths = ["/sbin", "/usr/sbin", "/usr/local/sbin"]
search_paths = self._task.args.get("search_paths", default_search_paths)
# FIXME: switch all this to user arg spec validation methods when they are available
# Convert bare strings to a list
@@ -132,36 +127,38 @@ class ActionModule(ActionBase):
incorrect_type = any(not is_string(x) for x in search_paths)
if not isinstance(search_paths, list) or incorrect_type:
raise TypeError
except TypeError:
except TypeError as e:
# Error if we didn't get a list
err_msg = f"'search_paths' must be a string or flat list of strings, got {search_paths}"
raise AnsibleError(err_msg)
raise AnsibleError(err_msg) from e
full_path = find_command(shutdown_bin, search_paths) # find the path to the shutdown command
if not full_path: # if we could not find the shutdown command
# tell the user we will try with systemd
display.vvv(f'Unable to find command "{shutdown_bin}" in search paths: {search_paths}, will attempt a shutdown using systemd directly.')
systemctl_search_paths = ['/bin', '/usr/bin']
full_path = find_command('systemctl', systemctl_search_paths) # find the path to the systemctl command
display.vvv(
f'Unable to find command "{shutdown_bin}" in search paths: {search_paths}, will attempt a shutdown using systemd directly.'
)
systemctl_search_paths = ["/bin", "/usr/bin"]
full_path = find_command("systemctl", systemctl_search_paths) # find the path to the systemctl command
if not full_path: # if we couldn't find systemctl
raise AnsibleError(
f'Could not find command "{shutdown_bin}" in search paths: {search_paths} or systemctl'
f' command in search paths: {systemctl_search_paths}, unable to shutdown.') # we give up here
f" command in search paths: {systemctl_search_paths}, unable to shutdown."
) # we give up here
else:
return f"{full_path[0]} poweroff" # done, since we cannot use args with systemd shutdown
# systemd case taken care of, here we add args to the command
args = self._get_value_from_facts('SHUTDOWN_COMMAND_ARGS', distribution, 'DEFAULT_SHUTDOWN_COMMAND_ARGS')
args = self._get_value_from_facts(self.SHUTDOWN_COMMAND_ARGS, distribution, self.DEFAULT_SHUTDOWN_COMMAND_ARGS)
# Convert seconds to minutes. If less than 60, it is set to 0.
delay_sec = self.delay
shutdown_message = self._task.args.get('msg', self.DEFAULT_SHUTDOWN_MESSAGE)
shutdown_message = self._task.args.get("msg", self.DEFAULT_SHUTDOWN_MESSAGE)
af = args.format(delay_sec=delay_sec, delay_min=delay_sec // 60, message=shutdown_message)
return f'{full_path[0]} {af}'
return f"{full_path[0]} {af}"
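The argument formatting above passes both `delay_sec` and `delay_min`; each template picks whichever it needs (extra keyword arguments to `str.format` are simply ignored). A sketch with the Linux-style template:

```python
# Linux-style shutdown args take whole minutes; 90 seconds floors to 1.
args = '-h {delay_min} "{message}"'
delay_sec = 90
af = args.format(delay_sec=delay_sec, delay_min=delay_sec // 60, message="Shut down initiated by Ansible")
print(af)  # -h 1 "Shut down initiated by Ansible"
```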
def perform_shutdown(self, task_vars, distribution):
result = {}
def perform_shutdown(self, task_vars, distribution) -> dict[str, t.Any]:
result: dict[str, t.Any] = {}
shutdown_result = {}
shutdown_command_exec = self.get_shutdown_command(task_vars, distribution)
@@ -170,40 +167,41 @@ class ActionModule(ActionBase):
display.vvv(f"{self._task.action}: shutting down server...")
display.debug(f"{self._task.action}: shutting down server with command '{shutdown_command_exec}'")
if self._play_context.check_mode:
shutdown_result['rc'] = 0
shutdown_result["rc"] = 0
else:
shutdown_result = self._low_level_execute_command(shutdown_command_exec, sudoable=self.DEFAULT_SUDOABLE)
except AnsibleConnectionFailure as e:
# If the connection is closed too quickly due to the system being shutdown, carry on
display.debug(
f'{self._task.action}: AnsibleConnectionFailure caught and handled: {e}')
shutdown_result['rc'] = 0
display.debug(f"{self._task.action}: AnsibleConnectionFailure caught and handled: {e}")
shutdown_result["rc"] = 0
if shutdown_result['rc'] != 0:
result['failed'] = True
result['shutdown'] = False
result['msg'] = f"Shutdown command failed. Error was {fmt(shutdown_result, 'stdout')}, {fmt(shutdown_result, 'stderr')}"
if shutdown_result["rc"] != 0:
result["failed"] = True
result["shutdown"] = False
result["msg"] = (
f"Shutdown command failed. Error was {to_native(shutdown_result['stdout'])}, {to_native(shutdown_result['stderr'])}"
)
return result
result['failed'] = False
result['shutdown_command'] = shutdown_command_exec
result["failed"] = False
result["shutdown_command"] = shutdown_command_exec
return result
def run(self, tmp=None, task_vars=None):
def run(self, tmp: str | None = None, task_vars: dict[str, t.Any] | None = None) -> dict[str, t.Any]:
self._supports_check_mode = True
self._supports_async = True
# If running with local connection, fail so we don't shut down ourselves
if self._connection.transport == 'local' and (not self._play_context.check_mode):
msg = f'Running {self._task.action} with local connection would shutdown the control node.'
return {'changed': False, 'elapsed': 0, 'shutdown': False, 'failed': True, 'msg': msg}
if self._connection.transport == "local" and (not self._play_context.check_mode):
msg = f"Running {self._task.action} with local connection would shutdown the control node."
return {"changed": False, "elapsed": 0, "shutdown": False, "failed": True, "msg": msg}
if task_vars is None:
task_vars = {}
result = super(ActionModule, self).run(tmp, task_vars)
result = super().run(tmp, task_vars)
if result.get('skipped', False) or result.get('failed', False):
if result.get("skipped", False) or result.get("failed", False):
return result
distribution = self.get_distribution(task_vars)
@@ -211,12 +209,12 @@ class ActionModule(ActionBase):
# Initiate shutdown
shutdown_result = self.perform_shutdown(task_vars, distribution)
if shutdown_result['failed']:
if shutdown_result["failed"]:
result = shutdown_result
return result
result['shutdown'] = True
result['changed'] = True
result['shutdown_command'] = shutdown_result['shutdown_command']
result["shutdown"] = True
result["changed"] = True
result["shutdown_command"] = shutdown_result["shutdown_command"]
return result


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -83,9 +82,26 @@ options:
- name: ansible_doas_prompt_l10n
env:
- name: ANSIBLE_DOAS_PROMPT_L10N
allow_pipelining:
description:
- When set to V(true), allow pipelining with ansible-core 2.19+.
- This should only be used when doas is configured not to ask for a password (C(nopass)).
type: boolean
default: false
version_added: 12.4.0
ini:
- section: doas_become_plugin
key: allow_pipelining
vars:
- name: ansible_doas_allow_pipelining
env:
- name: ANSIBLE_DOAS_ALLOW_PIPELINING
notes:
- This become plugin does not work when connection pipelining is enabled. With ansible-core 2.19+, using it automatically
disables pipelining. On ansible-core 2.18 and before, pipelining must explicitly be disabled by the user.
- This become plugin does not work when connection pipelining is enabled
and doas requests a password.
With ansible-core 2.19+, using this plugin automatically disables pipelining,
unless O(allow_pipelining=true) is explicitly set by the user.
On ansible-core 2.18 and before, pipelining must explicitly be disabled by the user.
"""
import re
@@ -95,45 +111,47 @@ from ansible.plugins.become import BecomeBase
class BecomeModule(BecomeBase):
name = 'community.general.doas'
name = "community.general.doas"
# messages for detecting prompted password issues
fail = ('Permission denied',)
missing = ('Authorization required',)
fail = ("Permission denied",)
missing = ("Authorization required",)
# See https://github.com/ansible-collections/community.general/issues/9977,
# https://github.com/ansible/ansible/pull/78111
pipelining = False
# https://github.com/ansible/ansible/pull/78111,
# https://github.com/ansible-collections/community.general/issues/11411
@property
def pipelining(self) -> bool: # type: ignore[override]
return self.get_option("allow_pipelining")
def check_password_prompt(self, b_output):
''' checks if the expected password prompt exists in b_output '''
"""checks if the expected password prompt exists in b_output"""
# FIXME: more accurate would be: 'doas (%s@' % remote_user
# however become plugins don't have that information currently
b_prompts = [to_bytes(p) for p in self.get_option('prompt_l10n')] or [br'doas \(', br'Password:']
b_prompts = [to_bytes(p) for p in self.get_option("prompt_l10n")] or [rb"doas \(", rb"Password:"]
b_prompt = b"|".join(b_prompts)
return bool(re.match(b_prompt, b_output))
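The prompt detection above joins the candidate prompts into one alternation and anchors it at the start of the output via `re.match`. A standalone sketch with the default doas prompts:

```python
import re

# Default prompts used when no localized prompts are configured.
b_prompts = [rb"doas \(", rb"Password:"]
b_prompt = b"|".join(b_prompts)

assert re.match(b_prompt, b"doas (root@host) password: ")
assert re.match(b_prompt, b"Password:")
# re.match anchors at the start, so unrelated output does not match.
assert re.match(b_prompt, b"some other output") is None
```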
def build_become_command(self, cmd, shell):
super(BecomeModule, self).build_become_command(cmd, shell)
super().build_become_command(cmd, shell)
if not cmd:
return cmd
self.prompt = True
become_exe = self.get_option('become_exe')
become_exe = self.get_option("become_exe")
flags = self.get_option('become_flags')
if not self.get_option('become_pass') and '-n' not in flags:
flags += ' -n'
flags = self.get_option("become_flags")
if not self.get_option("become_pass") and "-n" not in flags:
flags += " -n"
become_user = self.get_option('become_user')
user = f'-u {become_user}' if become_user else ''
become_user = self.get_option("become_user")
user = f"-u {become_user}" if become_user else ""
success_cmd = self._build_success_command(cmd, shell, noexe=True)
executable = getattr(shell, 'executable', shell.SHELL_FAMILY)
executable = getattr(shell, "executable", shell.SHELL_FAMILY)
return f'{become_exe} {flags} {user} {executable} -c {success_cmd}'
return f"{become_exe} {flags} {user} {executable} -c {success_cmd}"


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -75,26 +74,25 @@ from ansible.plugins.become import BecomeBase
class BecomeModule(BecomeBase):
name = 'community.general.dzdo'
name = "community.general.dzdo"
# messages for detecting prompted password issues
fail = ('Sorry, try again.',)
fail = ("Sorry, try again.",)
def build_become_command(self, cmd, shell):
super(BecomeModule, self).build_become_command(cmd, shell)
super().build_become_command(cmd, shell)
if not cmd:
return cmd
becomecmd = self.get_option('become_exe')
becomecmd = self.get_option("become_exe")
flags = self.get_option('become_flags')
if self.get_option('become_pass'):
self.prompt = f'[dzdo via ansible, key={self._id}] password:'
flags = f"{flags.replace('-n', '')} -p \"{self.prompt}\""
flags = self.get_option("become_flags")
if self.get_option("become_pass"):
self.prompt = f"[dzdo via ansible, key={self._id}] password:"
flags = f'{flags.replace("-n", "")} -p "{self.prompt}"'
become_user = self.get_option('become_user')
user = f'-u {become_user}' if become_user else ''
become_user = self.get_option("become_user")
user = f"-u {become_user}" if become_user else ""
return f"{becomecmd} {flags} {user} {self._build_success_command(cmd, shell)}"


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -93,24 +92,22 @@ from ansible.plugins.become import BecomeBase
class BecomeModule(BecomeBase):
name = 'community.general.ksu'
name = "community.general.ksu"
# messages for detecting prompted password issues
fail = ('Password incorrect',)
missing = ('No password given',)
fail = ("Password incorrect",)
missing = ("No password given",)
def check_password_prompt(self, b_output):
''' checks if the expected password prompt exists in b_output '''
"""checks if the expected password prompt exists in b_output"""
prompts = self.get_option('prompt_l10n') or ["Kerberos password for .*@.*:"]
prompts = self.get_option("prompt_l10n") or ["Kerberos password for .*@.*:"]
b_prompt = b"|".join(to_bytes(p) for p in prompts)
return bool(re.match(b_prompt, b_output))
def build_become_command(self, cmd, shell):
super(BecomeModule, self).build_become_command(cmd, shell)
super().build_become_command(cmd, shell)
# Prompt handling for ``ksu`` is more complicated, this
# is used to satisfy the connection plugin
@@ -119,8 +116,8 @@ class BecomeModule(BecomeBase):
if not cmd:
return cmd
exe = self.get_option('become_exe')
exe = self.get_option("become_exe")
flags = self.get_option('become_flags')
user = self.get_option('become_user')
return f'{exe} {user} {flags} -e {self._build_success_command(cmd, shell)} '
flags = self.get_option("become_flags")
user = self.get_option("become_user")
return f"{exe} {user} {flags} -e {self._build_success_command(cmd, shell)} "


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -93,20 +92,18 @@ EXAMPLES = r"""
from re import compile as re_compile
from ansible.plugins.become import BecomeBase
from ansible.module_utils.common.text.converters import to_bytes
from ansible.plugins.become import BecomeBase
ansi_color_codes = re_compile(to_bytes(r'\x1B\[[0-9;]+m'))
ansi_color_codes = re_compile(to_bytes(r"\x1B\[[0-9;]+m"))
class BecomeModule(BecomeBase):
name = "community.general.machinectl"
name = 'community.general.machinectl'
prompt = 'Password: '
fail = ('==== AUTHENTICATION FAILED ====',)
success = ('==== AUTHENTICATION COMPLETE ====',)
prompt = "Password: "
fail = ("==== AUTHENTICATION FAILED ====",)
success = ("==== AUTHENTICATION COMPLETE ====",)
require_tty = True # see https://github.com/ansible-collections/community.general/issues/6932
# See https://github.com/ansible/ansible/issues/81254,
@@ -118,16 +115,19 @@ class BecomeModule(BecomeBase):
return ansi_color_codes.sub(b"", line)
def build_become_command(self, cmd, shell):
super(BecomeModule, self).build_become_command(cmd, shell)
super().build_become_command(cmd, shell)
if not cmd:
return cmd
become = self.get_option('become_exe')
become = self.get_option("become_exe")
flags = self.get_option('become_flags')
user = self.get_option('become_user')
return f'{become} -q shell {flags} {user}@ {self._build_success_command(cmd, shell)}'
flags = self.get_option("become_flags")
user = self.get_option("become_user")
# SYSTEMD_COLORS=0 stops machinectl from appending ANSI reset
# sequences (ESC[0m, ESC[J) after the child exits, which would
# otherwise land after the module JSON and break result parsing.
return f"SYSTEMD_COLORS=0 {become} -q shell {flags} {user}@ {self._build_success_command(cmd, shell)}"
def check_success(self, b_output):
b_output = self.remove_ansi_codes(b_output)
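The ANSI stripping that `check_success` relies on can be exercised on its own; the regex removes color/reset escape sequences so the success marker can be matched:

```python
import re

ansi_color_codes = re.compile(rb"\x1B\[[0-9;]+m")
line = b"\x1b[1;31m==== AUTHENTICATION COMPLETE ====\x1b[0m"
print(ansi_color_codes.sub(b"", line))  # b'==== AUTHENTICATION COMPLETE ===='
```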


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -87,22 +86,21 @@ from ansible.plugins.become import BecomeBase
class BecomeModule(BecomeBase):
name = "community.general.pbrun"
name = 'community.general.pbrun'
prompt = 'Password:'
prompt = "Password:"
def build_become_command(self, cmd, shell):
super(BecomeModule, self).build_become_command(cmd, shell)
super().build_become_command(cmd, shell)
if not cmd:
return cmd
become_exe = self.get_option('become_exe')
become_exe = self.get_option("become_exe")
flags = self.get_option('become_flags')
become_user = self.get_option('become_user')
user = f'-u {become_user}' if become_user else ''
noexe = not self.get_option('wrap_exe')
flags = self.get_option("become_flags")
become_user = self.get_option("become_user")
user = f"-u {become_user}" if become_user else ""
noexe = not self.get_option("wrap_exe")
return f"{become_exe} {flags} {user} {self._build_success_command(cmd, shell, noexe=noexe)}"


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -92,17 +91,16 @@ from ansible.plugins.become import BecomeBase
class BecomeModule(BecomeBase):
name = 'community.general.pfexec'
name = "community.general.pfexec"
def build_become_command(self, cmd, shell):
super(BecomeModule, self).build_become_command(cmd, shell)
super().build_become_command(cmd, shell)
if not cmd:
return cmd
exe = self.get_option('become_exe')
exe = self.get_option("become_exe")
flags = self.get_option('become_flags')
noexe = not self.get_option('wrap_exe')
return f'{exe} {flags} {self._build_success_command(cmd, shell, noexe=noexe)}'
flags = self.get_option("become_flags")
noexe = not self.get_option("wrap_exe")
return f"{exe} {flags} {self._build_success_command(cmd, shell, noexe=noexe)}"


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -60,21 +59,21 @@ notes:
"""
from shlex import quote as shlex_quote
from ansible.plugins.become import BecomeBase
class BecomeModule(BecomeBase):
name = 'community.general.pmrun'
prompt = 'Enter UPM user password:'
name = "community.general.pmrun"
prompt = "Enter UPM user password:"
def build_become_command(self, cmd, shell):
super(BecomeModule, self).build_become_command(cmd, shell)
super().build_become_command(cmd, shell)
if not cmd:
return cmd
become = self.get_option('become_exe')
become = self.get_option("become_exe")
flags = self.get_option('become_flags')
return f'{become} {flags} {shlex_quote(self._build_success_command(cmd, shell))}'
flags = self.get_option("become_flags")
return f"{become} {flags} {shlex_quote(self._build_success_command(cmd, shell))}"
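Unlike the other become plugins here, pmrun wraps the success command with `shlex.quote` so it reaches pmrun as a single argument. A sketch with a hypothetical success command:

```python
from shlex import quote as shlex_quote

# Hypothetical wrapped command; quote() turns it into one shell-safe argument.
success_cmd = 'echo BECOME-SUCCESS && /usr/bin/python3 /path/to/module.py'
print(f"pmrun {shlex_quote(success_cmd)}")
```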


@@ -1,11 +1,9 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2024, Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import annotations
DOCUMENTATION = r"""
name: run0
short_description: Systemd's run0
@@ -62,6 +60,8 @@ options:
type: string
notes:
- This plugin only works when a C(polkit) rule is in place.
- This become plugin does not work when connection pipelining is enabled. With ansible-core 2.19+, using it automatically
disables pipelining. On ansible-core 2.18 and before, pipelining must explicitly be disabled by the user.
"""
EXAMPLES = r"""
@@ -79,22 +79,23 @@ EXAMPLES = r"""
from re import compile as re_compile
from ansible.plugins.become import BecomeBase
from ansible.module_utils.common.text.converters import to_bytes
from ansible.plugins.become import BecomeBase
ansi_color_codes = re_compile(to_bytes(r"\x1B\[[0-9;]+m"))
class BecomeModule(BecomeBase):
name = "community.general.run0"
prompt = "Password: "
fail = ("==== AUTHENTICATION FAILED ====",)
success = ("==== AUTHENTICATION COMPLETE ====",)
require_tty = (
True # see https://github.com/ansible-collections/community.general/issues/6932
)
require_tty = True # see https://github.com/ansible-collections/community.general/issues/6932
# See https://github.com/ansible/ansible/issues/81254,
# https://github.com/ansible/ansible/pull/78111
pipelining = False
@staticmethod
def remove_ansi_codes(line):
@@ -110,9 +111,11 @@ class BecomeModule(BecomeBase):
flags = self.get_option("become_flags")
user = self.get_option("become_user")
return (
f"{become} --user={user} {flags} {self._build_success_command(cmd, shell)}"
)
# SYSTEMD_COLORS=0 stops run0 from emitting terminal control
# sequences (window title OSC, ANSI reset) around the child
# command, which would otherwise corrupt the module JSON and
# break result parsing.
return f"SYSTEMD_COLORS=0 {become} --user={user} {flags} {self._build_success_command(cmd, shell)}"
def check_success(self, b_output):
b_output = self.remove_ansi_codes(b_output)


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2018, Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -76,20 +75,19 @@ from ansible.plugins.become import BecomeBase
class BecomeModule(BecomeBase):
name = "community.general.sesu"
name = 'community.general.sesu'
prompt = 'Please enter your password:'
fail = missing = ('Sorry, try again with sesu.',)
prompt = "Please enter your password:"
fail = missing = ("Sorry, try again with sesu.",)
def build_become_command(self, cmd, shell):
super(BecomeModule, self).build_become_command(cmd, shell)
super().build_become_command(cmd, shell)
if not cmd:
return cmd
become = self.get_option('become_exe')
become = self.get_option("become_exe")
flags = self.get_option('become_flags')
user = self.get_option('become_user')
return f'{become} {flags} {user} -c {self._build_success_command(cmd, shell)}'
flags = self.get_option("become_flags")
user = self.get_option("become_user")
return f"{become} {flags} {user} -c {self._build_success_command(cmd, shell)}"


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2021, Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -80,34 +79,33 @@ from ansible.plugins.become import BecomeBase
class BecomeModule(BecomeBase):
name = 'community.general.sudosu'
name = "community.general.sudosu"
# messages for detecting prompted password issues
fail = ('Sorry, try again.',)
missing = ('Sorry, a password is required to run sudo', 'sudo: a password is required')
fail = ("Sorry, try again.",)
missing = ("Sorry, a password is required to run sudo", "sudo: a password is required")
def build_become_command(self, cmd, shell):
super(BecomeModule, self).build_become_command(cmd, shell)
super().build_become_command(cmd, shell)
if not cmd:
return cmd
becomecmd = 'sudo'
becomecmd = "sudo"
flags = self.get_option('become_flags') or ''
prompt = ''
if self.get_option('become_pass'):
self.prompt = f'[sudo via ansible, key={self._id}] password:'
flags = self.get_option("become_flags") or ""
prompt = ""
if self.get_option("become_pass"):
self.prompt = f"[sudo via ansible, key={self._id}] password:"
if flags: # this could be simplified, but kept as is for now for backwards string matching
flags = flags.replace('-n', '')
flags = flags.replace("-n", "")
prompt = f'-p "{self.prompt}"'
user = self.get_option('become_user') or ''
user = self.get_option("become_user") or ""
if user:
user = f'{user}'
user = f"{user}"
if self.get_option('alt_method'):
if self.get_option("alt_method"):
return f"{becomecmd} {flags} {prompt} su -l {user} -c {self._build_success_command(cmd, shell, True)}"
else:
return f"{becomecmd} {flags} {prompt} su -l {user} {self._build_success_command(cmd, shell)}"
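The flag handling above shows why `-n` must be dropped when a password is expected: `sudo -n` refuses to prompt at all, while `-p` injects a unique marker the plugin can detect. A sketch of that branch with a hypothetical key:

```python
# When a password will be supplied, remove the non-interactive flag
# and inject a unique prompt marker for the plugin to detect.
flags = "-H -S -n"
prompt = ""
become_pass = True
if become_pass:
    task_prompt = "[sudo via ansible, key=hypothetical123] password:"
    flags = flags.replace("-n", "")
    prompt = f'-p "{task_prompt}"'
print(f"sudo {flags} {prompt} su -l root ...")
```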


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2014, Brian Coca, Josh Drake, et al
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -50,16 +49,17 @@ options:
import collections
import os
import time
from multiprocessing import Lock
from collections.abc import MutableSet
from itertools import chain
from multiprocessing import Lock
from ansible.errors import AnsibleError
from collections.abc import MutableSet
from ansible.plugins.cache import BaseCacheModule
from ansible.utils.display import Display
try:
import memcache
HAS_MEMCACHE = True
except ImportError:
HAS_MEMCACHE = False
@@ -67,7 +67,7 @@ except ImportError:
display = Display()
class ProxyClientPool(object):
class ProxyClientPool:
"""
Memcached connection pooling for thread/fork safety. Inspired by py-redis
connection pool.
@@ -76,7 +76,7 @@ class ProxyClientPool(object):
"""
def __init__(self, *args, **kwargs):
self.max_connections = kwargs.pop('max_connections', 1024)
self.max_connections = kwargs.pop("max_connections", 1024)
self.connection_args = args
self.connection_kwargs = kwargs
self.reset()
@@ -124,6 +124,7 @@ class ProxyClientPool(object):
def __getattr__(self, name):
def wrapped(*args, **kwargs):
return self._proxy_client(name, *args, **kwargs)
return wrapped
def _proxy_client(self, name, *args, **kwargs):
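The `__getattr__`/`wrapped` pair above is an attribute-proxy pattern: any unknown attribute access on the pool becomes a method call routed through a checked-out client. A minimal sketch, with a hypothetical fake client standing in for `memcache.Client`:

```python
class Pool:
    """Unknown attribute lookups return a callable that forwards to the client."""
    def __init__(self, client):
        self._client = client

    def __getattr__(self, name):
        # Only called for attributes not found normally, e.g. 'get', 'set'.
        def wrapped(*args, **kwargs):
            return getattr(self._client, name)(*args, **kwargs)
        return wrapped

class FakeMemcache:  # hypothetical stand-in for memcache.Client
    def get(self, key):
        return f"cached:{key}"

pool = Pool(FakeMemcache())
print(pool.get("ansible_facts_host1"))  # cached:ansible_facts_host1
```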
@@ -140,7 +141,8 @@ class CacheModuleKeys(MutableSet):
A set subclass that keeps track of insertion time and persists
the set in memcached.
"""
PREFIX = 'ansible_cache_keys'
PREFIX = "ansible_cache_keys"
def __init__(self, cache, *args, **kwargs):
self._cache = cache
@@ -172,15 +174,14 @@ class CacheModuleKeys(MutableSet):
class CacheModule(BaseCacheModule):
def __init__(self, *args, **kwargs):
connection = ['127.0.0.1:11211']
connection = ["127.0.0.1:11211"]
super(CacheModule, self).__init__(*args, **kwargs)
if self.get_option('_uri'):
connection = self.get_option('_uri')
self._timeout = self.get_option('_timeout')
self._prefix = self.get_option('_prefix')
super().__init__(*args, **kwargs)
if self.get_option("_uri"):
connection = self.get_option("_uri")
self._timeout = self.get_option("_timeout")
self._prefix = self.get_option("_prefix")
if not HAS_MEMCACHE:
raise AnsibleError("python-memcached is required for the memcached fact cache")


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2017, Brian Coca
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -43,10 +42,7 @@ options:
type: float
"""
try:
import cPickle as pickle
except ImportError:
import pickle
import pickle
from ansible.plugins.cache import BaseFileCacheModule
@@ -55,14 +51,15 @@ class CacheModule(BaseFileCacheModule):
"""
A caching module backed by pickle files.
"""
_persistent = False # prevent unnecessary JSON serialization and key munging
def _load(self, filepath):
# Pickle is a binary format
with open(filepath, 'rb') as f:
return pickle.load(f, encoding='bytes')
with open(filepath, "rb") as f:
return pickle.load(f, encoding="bytes")
def _dump(self, value, filepath):
with open(filepath, 'wb') as f:
with open(filepath, "wb") as f:
# Use pickle protocol 2 which is compatible with Python 2.3+.
pickle.dump(value, f, protocol=2)
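The `_load`/`_dump` pair above is a straight binary pickle round trip. A self-contained sketch using a temporary file:

```python
import os
import pickle
import tempfile

value = {"ansible_facts": {"ansible_distribution": "Debian"}}
path = os.path.join(tempfile.mkdtemp(), "host1")

with open(path, "wb") as f:
    pickle.dump(value, f, protocol=2)  # protocol 2 for backwards compatibility

with open(path, "rb") as f:
    assert pickle.load(f, encoding="bytes") == value
```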


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2014, Brian Coca, Josh Drake, et al
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -67,17 +66,18 @@ options:
section: defaults
"""
import json
import re
import time
import json
from ansible.errors import AnsibleError
from ansible.parsing.ajson import AnsibleJSONEncoder, AnsibleJSONDecoder
from ansible.parsing.ajson import AnsibleJSONDecoder, AnsibleJSONEncoder
from ansible.plugins.cache import BaseCacheModule
from ansible.utils.display import Display
try:
from redis import StrictRedis, VERSION
from redis import VERSION, StrictRedis
HAS_REDIS = True
except ImportError:
HAS_REDIS = False
@@ -94,32 +94,35 @@ class CacheModule(BaseCacheModule):
to expire keys. This mechanism is used over a pattern-matched 'scan' for
performance.
"""
_sentinel_service_name = None
re_url_conn = re.compile(r'^([^:]+|\[[^]]+\]):(\d+):(\d+)(?::(.*))?$')
re_sent_conn = re.compile(r'^(.*):(\d+)$')
re_url_conn = re.compile(r"^([^:]+|\[[^]]+\]):(\d+):(\d+)(?::(.*))?$")
re_sent_conn = re.compile(r"^(.*):(\d+)$")
def __init__(self, *args, **kwargs):
uri = ''
uri = ""
super(CacheModule, self).__init__(*args, **kwargs)
if self.get_option('_uri'):
uri = self.get_option('_uri')
self._timeout = float(self.get_option('_timeout'))
self._prefix = self.get_option('_prefix')
self._keys_set = self.get_option('_keyset_name')
self._sentinel_service_name = self.get_option('_sentinel_service_name')
super().__init__(*args, **kwargs)
if self.get_option("_uri"):
uri = self.get_option("_uri")
self._timeout = float(self.get_option("_timeout"))
self._prefix = self.get_option("_prefix")
self._keys_set = self.get_option("_keyset_name")
self._sentinel_service_name = self.get_option("_sentinel_service_name")
if not HAS_REDIS:
raise AnsibleError("The 'redis' python module (version 2.4.5 or newer) is required for the redis fact cache, 'pip install redis'")
raise AnsibleError(
"The 'redis' python module (version 2.4.5 or newer) is required for the redis fact cache, 'pip install redis'"
)
self._cache = {}
kw = {}
# tls connection
tlsprefix = 'tls://'
tlsprefix = "tls://"
if uri.startswith(tlsprefix):
kw['ssl'] = True
uri = uri[len(tlsprefix):]
kw["ssl"] = True
uri = uri[len(tlsprefix) :]
# redis sentinel connection
if self._sentinel_service_name:
@@ -129,7 +132,7 @@ class CacheModule(BaseCacheModule):
connection = self._parse_connection(self.re_url_conn, uri)
self._db = StrictRedis(*connection, **kw)
display.vv(f'Redis connection: {self._db}')
display.vv(f"Redis connection: {self._db}")
@staticmethod
def _parse_connection(re_patt, uri):
@@ -144,36 +147,37 @@ class CacheModule(BaseCacheModule):
"""
try:
from redis.sentinel import Sentinel
except ImportError:
raise AnsibleError("The 'redis' python module (version 2.9.0 or newer) is required to use redis sentinel.")
except ImportError as e:
raise AnsibleError(
"The 'redis' python module (version 2.9.0 or newer) is required to use redis sentinel."
) from e
if ';' not in uri:
raise AnsibleError('_uri does not have sentinel syntax.')
if ";" not in uri:
raise AnsibleError("_uri does not have sentinel syntax.")
# format: "localhost:26379;localhost2:26379;0:changeme"
connections = uri.split(';')
connections = uri.split(";")
connection_args = connections.pop(-1)
if len(connection_args) > 0: # handle if no db nr is given
connection_args = connection_args.split(':')
kw['db'] = connection_args.pop(0)
connection_args = connection_args.split(":")
kw["db"] = connection_args.pop(0)
try:
kw['password'] = connection_args.pop(0)
kw["password"] = connection_args.pop(0)
except IndexError:
pass # password is optional
sentinels = [self._parse_connection(self.re_sent_conn, shost) for shost in connections]
display.vv(f'\nUsing redis sentinels: {sentinels}')
display.vv(f"\nUsing redis sentinels: {sentinels}")
scon = Sentinel(sentinels, **kw)
try:
return scon.master_for(self._sentinel_service_name, socket_timeout=0.2)
except Exception as exc:
raise AnsibleError(f'Could not connect to redis sentinel: {exc}')
raise AnsibleError(f"Could not connect to redis sentinel: {exc}") from exc
def _make_key(self, key):
return self._prefix + key
def get(self, key):
if key not in self._cache:
value = self._db.get(self._make_key(key))
# guard against the key not being removed from the zset;
@@ -187,7 +191,6 @@ class CacheModule(BaseCacheModule):
return self._cache.get(key)
def set(self, key, value):
value2 = json.dumps(value, cls=AnsibleJSONEncoder, sort_keys=True, indent=4)
if self._timeout > 0: # a timeout of 0 is handled as meaning 'never expire'
self._db.setex(self._make_key(key), int(self._timeout), value2)
@@ -211,7 +214,7 @@ class CacheModule(BaseCacheModule):
def contains(self, key):
self._expire_keys()
return (self._db.zrank(self._keys_set, key) is not None)
return self._db.zrank(self._keys_set, key) is not None
def delete(self, key):
if key in self._cache:

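The `re_url_conn` pattern from the hunk above accepts `host:port:db[:password]`, with IPv6 hosts in brackets; a standalone sketch of the parsing step (the `parse_connection` wrapper is illustrative):

```python
import re

# Same pattern as the cache plugin: host (or [ipv6]) : port : db [: password]
re_url_conn = re.compile(r"^([^:]+|\[[^]]+\]):(\d+):(\d+)(?::(.*))?$")

def parse_connection(uri):
    match = re_url_conn.match(uri)
    if not match:
        raise ValueError(f"Unable to parse connection string: {uri}")
    return match.groups()

print(parse_connection("localhost:6379:0"))       # ('localhost', '6379', '0', None)
print(parse_connection("[::1]:6379:0:changeme"))  # ('[::1]', '6379', '0', 'changeme')
```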

@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2017, Brian Coca
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -47,9 +46,8 @@ options:
import os
import yaml
from ansible.parsing.yaml.loader import AnsibleLoader
from ansible.parsing.yaml.dumper import AnsibleDumper
from ansible.parsing.yaml.loader import AnsibleLoader
from ansible.plugins.cache import BaseFileCacheModule
@@ -59,9 +57,9 @@ class CacheModule(BaseFileCacheModule):
"""
def _load(self, filepath):
with open(os.path.abspath(filepath), 'r', encoding='utf-8') as f:
with open(os.path.abspath(filepath), encoding="utf-8") as f:
return AnsibleLoader(f).get_single_data()
def _dump(self, value, filepath):
with open(os.path.abspath(filepath), 'w', encoding='utf-8') as f:
with open(os.path.abspath(filepath), "w", encoding="utf-8") as f:
yaml.dump(value, f, Dumper=AnsibleDumper, default_flow_style=False)


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2018 Matt Martz <matt@sivel.net>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -42,14 +41,15 @@ options:
key: cur_mem_file
"""
import time
import threading
import time
from ansible.plugins.callback import CallbackBase
class MemProf(threading.Thread):
"""Python thread for recording memory usage"""
def __init__(self, path, obj=None):
threading.Thread.__init__(self)
self.obj = obj
@@ -67,25 +67,25 @@ class MemProf(threading.Thread):
class CallbackModule(CallbackBase):
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'aggregate'
CALLBACK_NAME = 'community.general.cgroup_memory_recap'
CALLBACK_TYPE = "aggregate"
CALLBACK_NAME = "community.general.cgroup_memory_recap"
CALLBACK_NEEDS_WHITELIST = True
def __init__(self, display=None):
super(CallbackModule, self).__init__(display)
super().__init__(display)
self._task_memprof = None
self.task_results = []
def set_options(self, task_keys=None, var_options=None, direct=None):
super(CallbackModule, self).set_options(task_keys=task_keys, var_options=var_options, direct=direct)
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
self.cgroup_max_file = self.get_option('max_mem_file')
self.cgroup_current_file = self.get_option('cur_mem_file')
self.cgroup_max_file = self.get_option("max_mem_file")
self.cgroup_current_file = self.get_option("cur_mem_file")
with open(self.cgroup_max_file, 'w+') as f:
f.write('0')
with open(self.cgroup_max_file, "w+") as f:
f.write("0")
def _profile_memory(self, obj=None):
prev_task = None
@@ -113,8 +113,8 @@ class CallbackModule(CallbackBase):
with open(self.cgroup_max_file) as f:
max_results = int(f.read().strip()) / 1024 / 1024
self._display.banner('CGROUP MEMORY RECAP')
self._display.display(f'Execution Maximum: {max_results:0.2f}MB\n\n')
self._display.banner("CGROUP MEMORY RECAP")
self._display.display(f"Execution Maximum: {max_results:0.2f}MB\n\n")
for task, memory in self.task_results:
self._display.display(f'{task.get_name()} ({task._uuid}): {memory:0.2f}MB')
self._display.display(f"{task.get_name()} ({task._uuid}): {memory:0.2f}MB")

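The recap above divides the raw byte count from the cgroup file by 1024 twice to report MiB; a minimal sketch using a temporary stand-in for `/sys/fs/cgroup/.../memory.max_usage_in_bytes` (helper name is illustrative):

```python
import os
import tempfile

def max_mem_mb(path):
    # The cgroup file holds a plain byte count; convert to MiB as the recap does.
    with open(path) as f:
        return int(f.read().strip()) / 1024 / 1024

# Illustrative stand-in for the real cgroup max-usage file.
path = os.path.join(tempfile.mkdtemp(), "max_usage_in_bytes")
with open(path, "w+") as f:
    f.write("134217728")

print(f"Execution Maximum: {max_mem_mb(path):0.2f}MB")  # Execution Maximum: 128.00MB
```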

@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (C) 2012, Michael DeHaan, <michael.dehaan@gmail.com>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -26,13 +25,14 @@ class CallbackModule(CallbackBase):
This is a very trivial example of how any callback function can get at play and task objects.
play will be 'None' for runner invocations, and task will be None for 'setup' invocations.
"""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'aggregate'
CALLBACK_NAME = 'community.general.context_demo'
CALLBACK_TYPE = "aggregate"
CALLBACK_NAME = "community.general.context_demo"
CALLBACK_NEEDS_WHITELIST = True
def __init__(self, *args, **kwargs):
super(CallbackModule, self).__init__(*args, **kwargs)
super().__init__(*args, **kwargs)
self.task = None
self.play = None
@@ -41,11 +41,11 @@ class CallbackModule(CallbackBase):
self._display.display(" --- ARGS ")
for i, a in enumerate(args):
self._display.display(f' {i}: {a}')
self._display.display(f" {i}: {a}")
self._display.display(" --- KWARGS ")
for k in kwargs:
self._display.display(f' {k}: {kwargs[k]}')
self._display.display(f" {k}: {kwargs[k]}")
def v2_playbook_on_play_start(self, play):
self.play = play

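The ARGS/KWARGS dump in the hunk above is easy to reproduce outside a callback; `dump_call` below is an illustrative stand-in for the plugin's method, with `display` as any line sink:

```python
def dump_call(display, *args, **kwargs):
    # Mirrors the context_demo output: positional args by index, then kwargs.
    display(" --- ARGS ")
    for i, a in enumerate(args):
        display(f"  {i}: {a}")
    display(" --- KWARGS ")
    for k, v in kwargs.items():
        display(f"  {k}: {v}")

lines = []
dump_call(lines.append, "v2_playbook_on_play_start", play="PLAY [all]")
print("\n".join(lines))
```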

@@ -1,10 +1,9 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2018, Ivan Aragones Muniesa <ivan.aragones.muniesa@gmail.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
'''
Counter enabled Ansible callback plugin (See DOCUMENTATION for more information)
'''
"""
Counter enabled Ansible callback plugin (See DOCUMENTATION for more information)
"""
from __future__ import annotations
@@ -24,21 +23,20 @@ requirements:
"""
from ansible import constants as C
from ansible.playbook.task_include import TaskInclude
from ansible.plugins.callback import CallbackBase
from ansible.utils.color import colorize, hostcolor
from ansible.playbook.task_include import TaskInclude
class CallbackModule(CallbackBase):
'''
"""
This is the default callback interface, which simply prints messages
to stdout when new callback events are received.
'''
"""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'stdout'
CALLBACK_NAME = 'community.general.counter_enabled'
CALLBACK_TYPE = "stdout"
CALLBACK_NAME = "community.general.counter_enabled"
_task_counter = 1
_task_total = 0
@@ -48,7 +46,7 @@ class CallbackModule(CallbackBase):
_previous_batch_total = 0
def __init__(self):
super(CallbackModule, self).__init__()
super().__init__()
self._playbook = ""
self._play = ""
@@ -56,11 +54,7 @@ class CallbackModule(CallbackBase):
def _all_vars(self, host=None, task=None):
# host and task need to be specified in case 'magic variables' (host vars, group vars, etc)
# need to be loaded as well
return self._play.get_variable_manager().get_vars(
play=self._play,
host=host,
task=task
)
return self._play.get_variable_manager().get_vars(play=self._play, host=host, task=task)
def v2_playbook_on_start(self, playbook):
self._playbook = playbook
@@ -68,7 +62,7 @@ class CallbackModule(CallbackBase):
def v2_playbook_on_play_start(self, play):
name = play.get_name().strip()
if not name:
msg = u"play"
msg = "play"
else:
msg = f"PLAY [{name}]"
@@ -78,8 +72,8 @@ class CallbackModule(CallbackBase):
self._play = play
self._previous_batch_total = self._current_batch_total
self._current_batch_total = self._previous_batch_total + len(self._all_vars()['vars']['ansible_play_batch'])
self._host_total = len(self._all_vars()['vars']['ansible_play_hosts_all'])
self._current_batch_total = self._previous_batch_total + len(self._all_vars()["vars"]["ansible_play_batch"])
self._host_total = len(self._all_vars()["vars"]["ansible_play_hosts_all"])
self._task_total = len(self._play.get_tasks()[0])
self._task_counter = 1
@@ -94,39 +88,39 @@ class CallbackModule(CallbackBase):
f"{hostcolor(host, stat)} : {colorize('ok', stat['ok'], C.COLOR_OK)} {colorize('changed', stat['changed'], C.COLOR_CHANGED)} "
f"{colorize('unreachable', stat['unreachable'], C.COLOR_UNREACHABLE)} {colorize('failed', stat['failures'], C.COLOR_ERROR)} "
f"{colorize('rescued', stat['rescued'], C.COLOR_OK)} {colorize('ignored', stat['ignored'], C.COLOR_WARN)}",
screen_only=True
screen_only=True,
)
self._display.display(
f"{hostcolor(host, stat, False)} : {colorize('ok', stat['ok'], None)} {colorize('changed', stat['changed'], None)} "
f"{colorize('unreachable', stat['unreachable'], None)} {colorize('failed', stat['failures'], None)} "
f"{colorize('rescued', stat['rescued'], None)} {colorize('ignored', stat['ignored'], None)}",
log_only=True
log_only=True,
)
self._display.display("", screen_only=True)
# print custom stats
if self._plugin_options.get('show_custom_stats', C.SHOW_CUSTOM_STATS) and stats.custom:
if self._plugin_options.get("show_custom_stats", C.SHOW_CUSTOM_STATS) and stats.custom:
# fallback on constants for inherited plugins missing docs
self._display.banner("CUSTOM STATS: ")
# per host
# TODO: come up with 'pretty format'
for k in sorted(stats.custom.keys()):
if k == '_run':
if k == "_run":
continue
_custom_stats = self._dump_results(stats.custom[k], indent=1).replace('\n', '')
self._display.display(f'\t{k}: {_custom_stats}')
_custom_stats = self._dump_results(stats.custom[k], indent=1).replace("\n", "")
self._display.display(f"\t{k}: {_custom_stats}")
# print per run custom stats
if '_run' in stats.custom:
if "_run" in stats.custom:
self._display.display("", screen_only=True)
_custom_stats_run = self._dump_results(stats.custom['_run'], indent=1).replace('\n', '')
self._display.display(f'\tRUN: {_custom_stats_run}')
_custom_stats_run = self._dump_results(stats.custom["_run"], indent=1).replace("\n", "")
self._display.display(f"\tRUN: {_custom_stats_run}")
self._display.display("", screen_only=True)
def v2_playbook_on_task_start(self, task, is_conditional):
args = ''
args = ""
# args can be specified as no_log in several places: in the task or in
# the argument spec. We can check whether the task is no_log but the
# argument spec can't be because that is only run on the target
@@ -136,8 +130,8 @@ class CallbackModule(CallbackBase):
# that they can secure this if they feel that their stdout is insecure
# (shoulder surfing, logging stdout straight to a file, etc).
if not task.no_log and C.DISPLAY_ARGS_TO_STDOUT:
args = ', '.join((f'{k}={v}' for k, v in task.args.items()))
args = f' {args}'
args = ", ".join((f"{k}={v}" for k, v in task.args.items()))
args = f" {args}"
self._display.banner(f"TASK {self._task_counter}/{self._task_total} [{task.get_name().strip()}{args}]")
if self._display.verbosity >= 2:
path = task.get_path()
@@ -147,23 +141,24 @@ class CallbackModule(CallbackBase):
self._task_counter += 1
def v2_runner_on_ok(self, result):
self._host_counter += 1
delegated_vars = result._result.get('_ansible_delegated_vars', None)
delegated_vars = result._result.get("_ansible_delegated_vars", None)
if self._play.strategy == 'free' and self._last_task_banner != result._task._uuid:
if self._play.strategy == "free" and self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
if isinstance(result._task, TaskInclude):
return
elif result._result.get('changed', False):
elif result._result.get("changed", False):
if delegated_vars:
msg = f"changed: {self._host_counter}/{self._host_total} [{result._host.get_name()} -> {delegated_vars['ansible_host']}]"
else:
msg = f"changed: {self._host_counter}/{self._host_total} [{result._host.get_name()}]"
color = C.COLOR_CHANGED
else:
if not self._plugin_options.get("display_ok_hosts", True):
return
if delegated_vars:
msg = f"ok: {self._host_counter}/{self._host_total} [{result._host.get_name()} -> {delegated_vars['ansible_host']}]"
else:
@@ -172,7 +167,7 @@ class CallbackModule(CallbackBase):
self._handle_warnings(result._result)
if result._task.loop and 'results' in result._result:
if result._task.loop and "results" in result._result:
self._process_items(result)
else:
self._clean_results(result._result, result._task.action)
@@ -182,19 +177,18 @@ class CallbackModule(CallbackBase):
self._display.display(msg, color=color)
def v2_runner_on_failed(self, result, ignore_errors=False):
self._host_counter += 1
delegated_vars = result._result.get('_ansible_delegated_vars', None)
delegated_vars = result._result.get("_ansible_delegated_vars", None)
self._clean_results(result._result, result._task.action)
if self._play.strategy == 'free' and self._last_task_banner != result._task._uuid:
if self._play.strategy == "free" and self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
self._handle_exception(result._result)
self._handle_warnings(result._result)
if result._task.loop and 'results' in result._result:
if result._task.loop and "results" in result._result:
self._process_items(result)
else:
@@ -202,12 +196,12 @@ class CallbackModule(CallbackBase):
self._display.display(
f"fatal: {self._host_counter}/{self._host_total} [{result._host.get_name()} -> "
f"{delegated_vars['ansible_host']}]: FAILED! => {self._dump_results(result._result)}",
color=C.COLOR_ERROR
color=C.COLOR_ERROR,
)
else:
self._display.display(
f"fatal: {self._host_counter}/{self._host_total} [{result._host.get_name()}]: FAILED! => {self._dump_results(result._result)}",
color=C.COLOR_ERROR
color=C.COLOR_ERROR,
)
if ignore_errors:
@@ -216,14 +210,15 @@ class CallbackModule(CallbackBase):
def v2_runner_on_skipped(self, result):
self._host_counter += 1
if self._plugin_options.get('show_skipped_hosts', C.DISPLAY_SKIPPED_HOSTS): # fallback on constants for inherited plugins missing docs
if self._plugin_options.get(
"show_skipped_hosts", C.DISPLAY_SKIPPED_HOSTS
): # fallback on constants for inherited plugins missing docs
self._clean_results(result._result, result._task.action)
if self._play.strategy == 'free' and self._last_task_banner != result._task._uuid:
if self._play.strategy == "free" and self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
if result._task.loop and 'results' in result._result:
if result._task.loop and "results" in result._result:
self._process_items(result)
else:
msg = f"skipping: {self._host_counter}/{self._host_total} [{result._host.get_name()}]"
@@ -234,18 +229,18 @@ class CallbackModule(CallbackBase):
def v2_runner_on_unreachable(self, result):
self._host_counter += 1
if self._play.strategy == 'free' and self._last_task_banner != result._task._uuid:
if self._play.strategy == "free" and self._last_task_banner != result._task._uuid:
self._print_task_banner(result._task)
delegated_vars = result._result.get('_ansible_delegated_vars', None)
delegated_vars = result._result.get("_ansible_delegated_vars", None)
if delegated_vars:
self._display.display(
f"fatal: {self._host_counter}/{self._host_total} [{result._host.get_name()} -> "
f"{delegated_vars['ansible_host']}]: UNREACHABLE! => {self._dump_results(result._result)}",
color=C.COLOR_UNREACHABLE
color=C.COLOR_UNREACHABLE,
)
else:
self._display.display(
f"fatal: {self._host_counter}/{self._host_total} [{result._host.get_name()}]: UNREACHABLE! => {self._dump_results(result._result)}",
color=C.COLOR_UNREACHABLE
color=C.COLOR_UNREACHABLE,
)

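The task and host counters above drive two message shapes: a `TASK n/total` banner and a per-host progress prefix. A sketch of both (function names are illustrative, not the plugin's API):

```python
def task_banner(counter, total, name, args=""):
    # Mirrors "TASK <n>/<total> [<name><args>]" from the plugin's banner.
    return f"TASK {counter}/{total} [{name.strip()}{args}]"

def host_progress(status, counter, total, host, delegated=None):
    # Mirrors "changed: <n>/<total> [host -> delegated_host]".
    target = f"{host} -> {delegated}" if delegated else host
    return f"{status}: {counter}/{total} [{target}]"

print(task_banner(2, 5, "Install packages "))              # TASK 2/5 [Install packages]
print(host_progress("changed", 1, 3, "web1", "10.0.0.5"))  # changed: 1/3 [web1 -> 10.0.0.5]
```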

@@ -1,5 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2024, Felix Fontein <felix@fontein.de>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -37,8 +35,8 @@ from ansible.plugins.callback.default import CallbackModule as Default
class CallbackModule(Default):
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'stdout'
CALLBACK_NAME = 'community.general.default_without_diff'
CALLBACK_TYPE = "stdout"
CALLBACK_NAME = "community.general.default_without_diff"
def v2_on_file_diff(self, result):
pass

View File

@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2016, Dag Wieers <dag@wieers.com>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -23,17 +22,18 @@ requirements:
HAS_OD = False
try:
from collections import OrderedDict
HAS_OD = True
except ImportError:
pass
import sys
from collections.abc import MutableMapping, MutableSequence
from ansible.plugins.callback.default import CallbackModule as CallbackModule_default
from ansible.utils.color import colorize, hostcolor
from ansible.utils.display import Display
import sys
display = Display()
@@ -70,66 +70,66 @@ display = Display()
# FIXME: Importing constants as C simply does not work, beats me :-/
# from ansible import constants as C
class C:
COLOR_HIGHLIGHT = 'white'
COLOR_VERBOSE = 'blue'
COLOR_WARN = 'bright purple'
COLOR_ERROR = 'red'
COLOR_DEBUG = 'dark gray'
COLOR_DEPRECATE = 'purple'
COLOR_SKIP = 'cyan'
COLOR_UNREACHABLE = 'bright red'
COLOR_OK = 'green'
COLOR_CHANGED = 'yellow'
COLOR_HIGHLIGHT = "white"
COLOR_VERBOSE = "blue"
COLOR_WARN = "bright purple"
COLOR_ERROR = "red"
COLOR_DEBUG = "dark gray"
COLOR_DEPRECATE = "purple"
COLOR_SKIP = "cyan"
COLOR_UNREACHABLE = "bright red"
COLOR_OK = "green"
COLOR_CHANGED = "yellow"
# Taken from Dstat
class vt100:
black = '\033[0;30m'
darkred = '\033[0;31m'
darkgreen = '\033[0;32m'
darkyellow = '\033[0;33m'
darkblue = '\033[0;34m'
darkmagenta = '\033[0;35m'
darkcyan = '\033[0;36m'
gray = '\033[0;37m'
black = "\033[0;30m"
darkred = "\033[0;31m"
darkgreen = "\033[0;32m"
darkyellow = "\033[0;33m"
darkblue = "\033[0;34m"
darkmagenta = "\033[0;35m"
darkcyan = "\033[0;36m"
gray = "\033[0;37m"
darkgray = '\033[1;30m'
red = '\033[1;31m'
green = '\033[1;32m'
yellow = '\033[1;33m'
blue = '\033[1;34m'
magenta = '\033[1;35m'
cyan = '\033[1;36m'
white = '\033[1;37m'
darkgray = "\033[1;30m"
red = "\033[1;31m"
green = "\033[1;32m"
yellow = "\033[1;33m"
blue = "\033[1;34m"
magenta = "\033[1;35m"
cyan = "\033[1;36m"
white = "\033[1;37m"
blackbg = '\033[40m'
redbg = '\033[41m'
greenbg = '\033[42m'
yellowbg = '\033[43m'
bluebg = '\033[44m'
magentabg = '\033[45m'
cyanbg = '\033[46m'
whitebg = '\033[47m'
blackbg = "\033[40m"
redbg = "\033[41m"
greenbg = "\033[42m"
yellowbg = "\033[43m"
bluebg = "\033[44m"
magentabg = "\033[45m"
cyanbg = "\033[46m"
whitebg = "\033[47m"
reset = '\033[0;0m'
bold = '\033[1m'
reverse = '\033[2m'
underline = '\033[4m'
reset = "\033[0;0m"
bold = "\033[1m"
reverse = "\033[2m"
underline = "\033[4m"
clear = '\033[2J'
# clearline = '\033[K'
clearline = '\033[2K'
save = '\033[s'
restore = '\033[u'
save_all = '\0337'
restore_all = '\0338'
linewrap = '\033[7h'
nolinewrap = '\033[7l'
clear = "\033[2J"
# clearline = '\033[K'
clearline = "\033[2K"
save = "\033[s"
restore = "\033[u"
save_all = "\0337"
restore_all = "\0338"
linewrap = "\033[7h"
nolinewrap = "\033[7l"
up = '\033[1A'
down = '\033[1B'
right = '\033[1C'
left = '\033[1D'
up = "\033[1A"
down = "\033[1B"
right = "\033[1C"
left = "\033[1D"
colors = dict(
@@ -141,41 +141,38 @@ colors = dict(
unreachable=vt100.red,
)
states = ('skipped', 'ok', 'changed', 'failed', 'unreachable')
states = ("skipped", "ok", "changed", "failed", "unreachable")
class CallbackModule(CallbackModule_default):
'''
"""
This is the dense callback interface, where screen estate is still valued.
'''
"""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'stdout'
CALLBACK_NAME = 'dense'
CALLBACK_TYPE = "stdout"
CALLBACK_NAME = "dense"
def __init__(self):
# From CallbackModule
self._display = display
if HAS_OD:
self.disabled = False
self.super_ref = super(CallbackModule, self)
self.super_ref = super()
self.super_ref.__init__()
# Attributes to remove from results for more density
self.removed_attributes = (
# 'changed',
'delta',
"delta",
# 'diff',
'end',
'failed',
'failed_when_result',
'invocation',
'start',
'stdout_lines',
"end",
"failed",
"failed_when_result",
"invocation",
"start",
"stdout_lines",
)
# Initiate data structures
@@ -183,13 +180,15 @@ class CallbackModule(CallbackModule_default):
self.keep = False
self.shown_title = False
self.count = dict(play=0, handler=0, task=0)
self.type = 'foo'
self.type = "foo"
# Start immediately on the first line
sys.stdout.write(vt100.reset + vt100.save + vt100.clearline)
sys.stdout.flush()
else:
display.warning("The 'dense' callback plugin requires OrderedDict which is not available in this version of python, disabling.")
display.warning(
"The 'dense' callback plugin requires OrderedDict which is not available in this version of python, disabling."
)
self.disabled = True
def __del__(self):
@@ -199,27 +198,27 @@ class CallbackModule(CallbackModule_default):
name = result._host.get_name()
# Add a new status in case a failed task is ignored
if status == 'failed' and result._task.ignore_errors:
status = 'ignored'
if status == "failed" and result._task.ignore_errors:
status = "ignored"
# Check if we have to update an existing state (when looping over items)
if name not in self.hosts:
self.hosts[name] = dict(state=status)
elif states.index(self.hosts[name]['state']) < states.index(status):
self.hosts[name]['state'] = status
elif states.index(self.hosts[name]["state"]) < states.index(status):
self.hosts[name]["state"] = status
# Store delegated hostname, if needed
delegated_vars = result._result.get('_ansible_delegated_vars', None)
delegated_vars = result._result.get("_ansible_delegated_vars", None)
if delegated_vars:
self.hosts[name]['delegate'] = delegated_vars['ansible_host']
self.hosts[name]["delegate"] = delegated_vars["ansible_host"]
# Print progress bar
self._display_progress(result)
# # Ensure that tasks with changes/failures stay on-screen, and during diff-mode
# if status in ['changed', 'failed', 'unreachable'] or (result.get('_diff_mode', False) and result._result.get('diff', False)):
# # Ensure that tasks with changes/failures stay on-screen, and during diff-mode
# if status in ['changed', 'failed', 'unreachable'] or (result.get('_diff_mode', False) and result._result.get('diff', False)):
# Ensure that tasks with changes/failures stay on-screen
if status in ['changed', 'failed', 'unreachable']:
if status in ["changed", "failed", "unreachable"]:
self.keep = True
if self._display.verbosity == 1:
@@ -240,9 +239,9 @@ class CallbackModule(CallbackModule_default):
del result[attr]
def _handle_exceptions(self, result):
if 'exception' in result:
if "exception" in result:
# Remove the exception from the result so it is not shown every time
del result['exception']
del result["exception"]
if self._display.verbosity == 1:
return "An exception occurred during task execution. To see the full traceback, use -vvv."
@@ -250,16 +249,16 @@ class CallbackModule(CallbackModule_default):
def _display_progress(self, result=None):
# Always rewrite the complete line
sys.stdout.write(vt100.restore + vt100.reset + vt100.clearline + vt100.nolinewrap + vt100.underline)
sys.stdout.write(f'{self.type} {self.count[self.type]}:')
sys.stdout.write(f"{self.type} {self.count[self.type]}:")
sys.stdout.write(vt100.reset)
sys.stdout.flush()
# Print out each host in its own status-color
for name in self.hosts:
sys.stdout.write(' ')
if self.hosts[name].get('delegate', None):
sys.stdout.write(" ")
if self.hosts[name].get("delegate", None):
sys.stdout.write(f"{self.hosts[name]['delegate']}>")
sys.stdout.write(colors[self.hosts[name]['state']] + name + vt100.reset)
sys.stdout.write(colors[self.hosts[name]["state"]] + name + vt100.reset)
sys.stdout.flush()
sys.stdout.write(vt100.linewrap)
@@ -268,7 +267,7 @@ class CallbackModule(CallbackModule_default):
if not self.shown_title:
self.shown_title = True
sys.stdout.write(vt100.restore + vt100.reset + vt100.clearline + vt100.underline)
sys.stdout.write(f'{self.type} {self.count[self.type]}: {self.task.get_name().strip()}')
sys.stdout.write(f"{self.type} {self.count[self.type]}: {self.task.get_name().strip()}")
sys.stdout.write(f"{vt100.restore}{vt100.reset}\n{vt100.save}{vt100.clearline}")
sys.stdout.flush()
else:
@@ -285,29 +284,31 @@ class CallbackModule(CallbackModule_default):
self._clean_results(result._result)
dump = ''
if result._task.action == 'include':
dump = ""
if result._task.action == "include":
return
elif status == 'ok':
elif status == "ok":
return
elif status == 'ignored':
elif status == "ignored":
dump = self._handle_exceptions(result._result)
elif status == 'failed':
elif status == "failed":
dump = self._handle_exceptions(result._result)
elif status == 'unreachable':
dump = result._result['msg']
elif status == "unreachable":
dump = result._result["msg"]
if not dump:
dump = self._dump_results(result._result)
if result._task.loop and 'results' in result._result:
if result._task.loop and "results" in result._result:
self._process_items(result)
else:
sys.stdout.write(f"{colors[status] + status}: ")
delegated_vars = result._result.get('_ansible_delegated_vars', None)
delegated_vars = result._result.get("_ansible_delegated_vars", None)
if delegated_vars:
sys.stdout.write(f"{vt100.reset}{result._host.get_name()}>{colors[status]}{delegated_vars['ansible_host']}")
sys.stdout.write(
f"{vt100.reset}{result._host.get_name()}>{colors[status]}{delegated_vars['ansible_host']}"
)
else:
sys.stdout.write(result._host.get_name())
@@ -315,7 +316,7 @@ class CallbackModule(CallbackModule_default):
sys.stdout.write(f"{vt100.reset}{vt100.save}{vt100.clearline}")
sys.stdout.flush()
if status == 'changed':
if status == "changed":
self._handle_warnings(result._result)
def v2_playbook_on_play_start(self, play):
@@ -328,13 +329,13 @@ class CallbackModule(CallbackModule_default):
# Reset at the start of each play
self.keep = False
self.count.update(dict(handler=0, task=0))
self.count['play'] += 1
self.count["play"] += 1
self.play = play
# Write the next play on screen IN UPPERCASE, and make it permanent
name = play.get_name().strip()
if not name:
name = 'unnamed'
name = "unnamed"
sys.stdout.write(f"PLAY {self.count['play']}: {name.upper()}")
sys.stdout.write(f"{vt100.restore}{vt100.reset}\n{vt100.save}{vt100.clearline}")
sys.stdout.flush()
@@ -352,14 +353,14 @@ class CallbackModule(CallbackModule_default):
self.shown_title = False
self.hosts = OrderedDict()
self.task = task
self.type = 'task'
self.type = "task"
# Enumerate task if not setup (task names are too long for dense output)
if task.get_name() != 'setup':
self.count['task'] += 1
if task.get_name() != "setup":
self.count["task"] += 1
# Write the next task on screen (behind the prompt is the previous output)
sys.stdout.write(f'{self.type} {self.count[self.type]}.')
sys.stdout.write(f"{self.type} {self.count[self.type]}.")
sys.stdout.write(vt100.reset)
sys.stdout.flush()
@@ -375,36 +376,36 @@ class CallbackModule(CallbackModule_default):
self.shown_title = False
self.hosts = OrderedDict()
self.task = task
self.type = 'handler'
self.type = "handler"
# Enumerate handler if not setup (handler names may be too long for dense output)
if task.get_name() != 'setup':
if task.get_name() != "setup":
self.count[self.type] += 1
# Write the next task on screen (behind the prompt is the previous output)
sys.stdout.write(f'{self.type} {self.count[self.type]}.')
sys.stdout.write(f"{self.type} {self.count[self.type]}.")
sys.stdout.write(vt100.reset)
sys.stdout.flush()
def v2_playbook_on_cleanup_task_start(self, task):
# TBD
sys.stdout.write('cleanup.')
sys.stdout.write("cleanup.")
sys.stdout.flush()
def v2_runner_on_failed(self, result, ignore_errors=False):
self._add_host(result, 'failed')
self._add_host(result, "failed")
def v2_runner_on_ok(self, result):
if result._result.get('changed', False):
self._add_host(result, 'changed')
if result._result.get("changed", False):
self._add_host(result, "changed")
else:
self._add_host(result, 'ok')
self._add_host(result, "ok")
def v2_runner_on_skipped(self, result):
self._add_host(result, 'skipped')
self._add_host(result, "skipped")
def v2_runner_on_unreachable(self, result):
self._add_host(result, 'unreachable')
self._add_host(result, "unreachable")
def v2_runner_on_include(self, included_file):
pass
@@ -424,24 +425,24 @@ class CallbackModule(CallbackModule_default):
self.v2_runner_item_on_ok(result)
def v2_runner_item_on_ok(self, result):
if result._result.get('changed', False):
self._add_host(result, 'changed')
if result._result.get("changed", False):
self._add_host(result, "changed")
else:
self._add_host(result, 'ok')
self._add_host(result, "ok")
# Old definition in v2.0
def v2_playbook_item_on_failed(self, result):
self.v2_runner_item_on_failed(result)
def v2_runner_item_on_failed(self, result):
self._add_host(result, 'failed')
self._add_host(result, "failed")
# Old definition in v2.0
def v2_playbook_item_on_skipped(self, result):
self.v2_runner_item_on_skipped(result)
def v2_runner_item_on_skipped(self, result):
self._add_host(result, 'skipped')
self._add_host(result, "skipped")
def v2_playbook_on_no_hosts_remaining(self):
if self._display.verbosity == 0 and self.keep:
@@ -468,7 +469,7 @@ class CallbackModule(CallbackModule_default):
return
sys.stdout.write(vt100.bold + vt100.underline)
sys.stdout.write('SUMMARY')
sys.stdout.write("SUMMARY")
sys.stdout.write(f"{vt100.restore}{vt100.reset}\n{vt100.save}{vt100.clearline}")
sys.stdout.flush()
@@ -480,10 +481,10 @@ class CallbackModule(CallbackModule_default):
f"{hostcolor(h, t)} : {colorize('ok', t['ok'], C.COLOR_OK)} {colorize('changed', t['changed'], C.COLOR_CHANGED)} "
f"{colorize('unreachable', t['unreachable'], C.COLOR_UNREACHABLE)} {colorize('failed', t['failures'], C.COLOR_ERROR)} "
f"{colorize('rescued', t['rescued'], C.COLOR_OK)} {colorize('ignored', t['ignored'], C.COLOR_WARN)}",
screen_only=True
screen_only=True,
)
# When using -vv or higher, simply do the default action
if display.verbosity >= 2 or not HAS_OD:
CallbackModule = CallbackModule_default
CallbackModule = CallbackModule_default # type: ignore
@@ -1,5 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2019, Trevor Highfill <trevor.highfill@outlook.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -780,19 +778,21 @@ playbook.yml: >-
import sys
from contextlib import contextmanager
from ansible.module_utils.common.text.converters import to_text
from ansible.plugins.callback.default import CallbackModule as Default
from ansible.template import Templar
from ansible.vars.manager import VariableManager
from ansible.plugins.callback.default import CallbackModule as Default
from ansible.module_utils.common.text.converters import to_text
try:
from ansible.template import trust_as_template # noqa: F401, pylint: disable=unused-import
SUPPORTS_DATA_TAGGING = True
except ImportError:
SUPPORTS_DATA_TAGGING = False
class DummyStdout(object):
class DummyStdout:
def flush(self):
pass
@@ -807,11 +807,12 @@ class CallbackModule(Default):
"""
Callback plugin that allows you to supply your own custom callback templates to be output.
"""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'stdout'
CALLBACK_NAME = 'community.general.diy'
DIY_NS = 'ansible_callback_diy'
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "stdout"
CALLBACK_NAME = "community.general.diy"
DIY_NS = "ansible_callback_diy"
@contextmanager
def _suppress_stdout(self, enabled):
@@ -824,50 +825,48 @@ class CallbackModule(Default):
def _get_output_specification(self, loader, variables):
_ret = {}
_calling_method = sys._getframe(1).f_code.co_name
_callback_type = (_calling_method[3:] if _calling_method[:3] == "v2_" else _calling_method)
_callback_options = ['msg', 'msg_color']
_callback_type = _calling_method[3:] if _calling_method[:3] == "v2_" else _calling_method
_callback_options = ["msg", "msg_color"]
for option in _callback_options:
_option_name = f'{_callback_type}_{option}'
_option_template = variables.get(
f"{self.DIY_NS}_{_option_name}",
self.get_option(_option_name)
)
_ret.update({option: self._template(
loader=loader,
template=_option_template,
variables=variables
)})
_option_name = f"{_callback_type}_{option}"
_option_template = variables.get(f"{self.DIY_NS}_{_option_name}", self.get_option(_option_name))
_ret.update({option: self._template(loader=loader, template=_option_template, variables=variables)})
_ret.update({'vars': variables})
_ret.update({"vars": variables})
return _ret
def _using_diy(self, spec):
sentinel = object()
omit = spec['vars'].get('omit', sentinel)
omit = spec["vars"].get("omit", sentinel)
# With Data Tagging, omit is sentinel
return (spec['msg'] is not None) and (spec['msg'] != omit or omit is sentinel)
return (spec["msg"] is not None) and (spec["msg"] != omit or omit is sentinel)
def _parent_has_callback(self):
return hasattr(super(CallbackModule, self), sys._getframe(1).f_code.co_name)
return hasattr(super(), sys._getframe(1).f_code.co_name)
def _template(self, loader, template, variables):
_templar = Templar(loader=loader, variables=variables)
return _templar.template(
template,
preserve_trailing_newlines=True,
convert_data=False,
escape_backslashes=True
)
return _templar.template(template, preserve_trailing_newlines=True, convert_data=False, escape_backslashes=True)
def _output(self, spec, stderr=False):
_msg = to_text(spec['msg'])
_msg = to_text(spec["msg"])
if len(_msg) > 0:
self._display.display(msg=_msg, color=spec['msg_color'], stderr=stderr)
self._display.display(msg=_msg, color=spec["msg_color"], stderr=stderr)
def _get_vars(self, playbook, play=None, host=None, task=None, included_file=None,
handler=None, result=None, stats=None, remove_attr_ref_loop=True):
def _get_vars(
self,
playbook,
play=None,
host=None,
task=None,
included_file=None,
handler=None,
result=None,
stats=None,
remove_attr_ref_loop=True,
):
def _get_value(obj, attr=None, method=None):
if attr:
return getattr(obj, attr, getattr(obj, f"_{attr}", None))
@@ -877,8 +876,8 @@ class CallbackModule(Default):
return _method()
def _remove_attr_ref_loop(obj, attributes):
_loop_var = getattr(obj, 'loop_control', None)
_loop_var = (_loop_var or 'item')
_loop_var = getattr(obj, "loop_control", None)
_loop_var = _loop_var or "item"
for attr in attributes:
if str(_loop_var) in str(_get_value(obj=obj, attr=attr)):
@@ -897,56 +896,128 @@ class CallbackModule(Default):
_all = _variable_manager.get_vars()
if play:
_all = play.get_variable_manager().get_vars(
play=play,
host=(host if host else getattr(result, '_host', None)),
task=(handler if handler else task)
play=play, host=(host if host else getattr(result, "_host", None)), task=(handler if handler else task)
)
_ret.update(_all)
_ret.update(_ret.get(self.DIY_NS, {self.DIY_NS: {} if SUPPORTS_DATA_TAGGING else CallbackDIYDict()}))
_ret[self.DIY_NS].update({'playbook': {}})
_playbook_attributes = ['entries', 'file_name', 'basedir']
_ret[self.DIY_NS].update({"playbook": {}})
_playbook_attributes = ["entries", "file_name", "basedir"]
for attr in _playbook_attributes:
_ret[self.DIY_NS]['playbook'].update({attr: _get_value(obj=playbook, attr=attr)})
_ret[self.DIY_NS]["playbook"].update({attr: _get_value(obj=playbook, attr=attr)})
if play:
_ret[self.DIY_NS].update({'play': {}})
_play_attributes = ['any_errors_fatal', 'become', 'become_flags', 'become_method',
'become_user', 'check_mode', 'collections', 'connection',
'debugger', 'diff', 'environment', 'fact_path', 'finalized',
'force_handlers', 'gather_facts', 'gather_subset',
'gather_timeout', 'handlers', 'hosts', 'ignore_errors',
'ignore_unreachable', 'included_conditional', 'included_path',
'max_fail_percentage', 'module_defaults', 'name', 'no_log',
'only_tags', 'order', 'port', 'post_tasks', 'pre_tasks',
'remote_user', 'removed_hosts', 'roles', 'run_once', 'serial',
'skip_tags', 'squashed', 'strategy', 'tags', 'tasks', 'uuid',
'validated', 'vars_files', 'vars_prompt']
_ret[self.DIY_NS].update({"play": {}})
_play_attributes = [
"any_errors_fatal",
"become",
"become_flags",
"become_method",
"become_user",
"check_mode",
"collections",
"connection",
"debugger",
"diff",
"environment",
"fact_path",
"finalized",
"force_handlers",
"gather_facts",
"gather_subset",
"gather_timeout",
"handlers",
"hosts",
"ignore_errors",
"ignore_unreachable",
"included_conditional",
"included_path",
"max_fail_percentage",
"module_defaults",
"name",
"no_log",
"only_tags",
"order",
"port",
"post_tasks",
"pre_tasks",
"remote_user",
"removed_hosts",
"roles",
"run_once",
"serial",
"skip_tags",
"squashed",
"strategy",
"tags",
"tasks",
"uuid",
"validated",
"vars_files",
"vars_prompt",
]
for attr in _play_attributes:
_ret[self.DIY_NS]['play'].update({attr: _get_value(obj=play, attr=attr)})
_ret[self.DIY_NS]["play"].update({attr: _get_value(obj=play, attr=attr)})
if host:
_ret[self.DIY_NS].update({'host': {}})
_host_attributes = ['name', 'uuid', 'address', 'implicit']
_ret[self.DIY_NS].update({"host": {}})
_host_attributes = ["name", "uuid", "address", "implicit"]
for attr in _host_attributes:
_ret[self.DIY_NS]['host'].update({attr: _get_value(obj=host, attr=attr)})
_ret[self.DIY_NS]["host"].update({attr: _get_value(obj=host, attr=attr)})
if task:
_ret[self.DIY_NS].update({'task': {}})
_task_attributes = ['action', 'any_errors_fatal', 'args', 'async', 'async_val',
'become', 'become_flags', 'become_method', 'become_user',
'changed_when', 'check_mode', 'collections', 'connection',
'debugger', 'delay', 'delegate_facts', 'delegate_to', 'diff',
'environment', 'failed_when', 'finalized', 'ignore_errors',
'ignore_unreachable', 'loop', 'loop_control', 'loop_with',
'module_defaults', 'name', 'no_log', 'notify', 'parent', 'poll',
'port', 'register', 'remote_user', 'retries', 'role', 'run_once',
'squashed', 'tags', 'untagged', 'until', 'uuid', 'validated',
'when']
_ret[self.DIY_NS].update({"task": {}})
_task_attributes = [
"action",
"any_errors_fatal",
"args",
"async",
"async_val",
"become",
"become_flags",
"become_method",
"become_user",
"changed_when",
"check_mode",
"collections",
"connection",
"debugger",
"delay",
"delegate_facts",
"delegate_to",
"diff",
"environment",
"failed_when",
"finalized",
"ignore_errors",
"ignore_unreachable",
"loop",
"loop_control",
"loop_with",
"module_defaults",
"name",
"no_log",
"notify",
"parent",
"poll",
"port",
"register",
"remote_user",
"retries",
"role",
"run_once",
"squashed",
"tags",
"untagged",
"until",
"uuid",
"validated",
"when",
]
# remove arguments that reference a loop var because they cause templating issues in
# callbacks that do not have the loop context(e.g. playbook_on_task_start)
@@ -954,91 +1025,128 @@ class CallbackModule(Default):
_task_attributes = _remove_attr_ref_loop(obj=task, attributes=_task_attributes)
for attr in _task_attributes:
_ret[self.DIY_NS]['task'].update({attr: _get_value(obj=task, attr=attr)})
_ret[self.DIY_NS]["task"].update({attr: _get_value(obj=task, attr=attr)})
if included_file:
_ret[self.DIY_NS].update({'included_file': {}})
_included_file_attributes = ['args', 'filename', 'hosts', 'is_role', 'task']
_ret[self.DIY_NS].update({"included_file": {}})
_included_file_attributes = ["args", "filename", "hosts", "is_role", "task"]
for attr in _included_file_attributes:
_ret[self.DIY_NS]['included_file'].update({attr: _get_value(
obj=included_file,
attr=attr
)})
_ret[self.DIY_NS]["included_file"].update({attr: _get_value(obj=included_file, attr=attr)})
if handler:
_ret[self.DIY_NS].update({'handler': {}})
_handler_attributes = ['action', 'any_errors_fatal', 'args', 'async', 'async_val',
'become', 'become_flags', 'become_method', 'become_user',
'changed_when', 'check_mode', 'collections', 'connection',
'debugger', 'delay', 'delegate_facts', 'delegate_to', 'diff',
'environment', 'failed_when', 'finalized', 'ignore_errors',
'ignore_unreachable', 'listen', 'loop', 'loop_control',
'loop_with', 'module_defaults', 'name', 'no_log',
'notified_hosts', 'notify', 'parent', 'poll', 'port',
'register', 'remote_user', 'retries', 'role', 'run_once',
'squashed', 'tags', 'untagged', 'until', 'uuid', 'validated',
'when']
_ret[self.DIY_NS].update({"handler": {}})
_handler_attributes = [
"action",
"any_errors_fatal",
"args",
"async",
"async_val",
"become",
"become_flags",
"become_method",
"become_user",
"changed_when",
"check_mode",
"collections",
"connection",
"debugger",
"delay",
"delegate_facts",
"delegate_to",
"diff",
"environment",
"failed_when",
"finalized",
"ignore_errors",
"ignore_unreachable",
"listen",
"loop",
"loop_control",
"loop_with",
"module_defaults",
"name",
"no_log",
"notified_hosts",
"notify",
"parent",
"poll",
"port",
"register",
"remote_user",
"retries",
"role",
"run_once",
"squashed",
"tags",
"untagged",
"until",
"uuid",
"validated",
"when",
]
if handler.loop and remove_attr_ref_loop:
_handler_attributes = _remove_attr_ref_loop(obj=handler,
attributes=_handler_attributes)
_handler_attributes = _remove_attr_ref_loop(obj=handler, attributes=_handler_attributes)
for attr in _handler_attributes:
_ret[self.DIY_NS]['handler'].update({attr: _get_value(obj=handler, attr=attr)})
_ret[self.DIY_NS]["handler"].update({attr: _get_value(obj=handler, attr=attr)})
_ret[self.DIY_NS]['handler'].update({'is_host_notified': handler.is_host_notified(host)})
_ret[self.DIY_NS]["handler"].update({"is_host_notified": handler.is_host_notified(host)})
if result:
_ret[self.DIY_NS].update({'result': {}})
_result_attributes = ['host', 'task', 'task_name']
_ret[self.DIY_NS].update({"result": {}})
_result_attributes = ["host", "task", "task_name"]
for attr in _result_attributes:
_ret[self.DIY_NS]['result'].update({attr: _get_value(obj=result, attr=attr)})
_ret[self.DIY_NS]["result"].update({attr: _get_value(obj=result, attr=attr)})
_result_methods = ['is_changed', 'is_failed', 'is_skipped', 'is_unreachable']
_result_methods = ["is_changed", "is_failed", "is_skipped", "is_unreachable"]
for method in _result_methods:
_ret[self.DIY_NS]['result'].update({method: _get_value(obj=result, method=method)})
_ret[self.DIY_NS]["result"].update({method: _get_value(obj=result, method=method)})
_ret[self.DIY_NS]['result'].update({'output': getattr(result, '_result', None)})
_ret[self.DIY_NS]["result"].update({"output": getattr(result, "_result", None)})
_ret.update(result._result)
if stats:
_ret[self.DIY_NS].update({'stats': {}})
_stats_attributes = ['changed', 'custom', 'dark', 'failures', 'ignored',
'ok', 'processed', 'rescued', 'skipped']
_ret[self.DIY_NS].update({"stats": {}})
_stats_attributes = [
"changed",
"custom",
"dark",
"failures",
"ignored",
"ok",
"processed",
"rescued",
"skipped",
]
for attr in _stats_attributes:
_ret[self.DIY_NS]['stats'].update({attr: _get_value(obj=stats, attr=attr)})
_ret[self.DIY_NS]["stats"].update({attr: _get_value(obj=stats, attr=attr)})
_ret[self.DIY_NS].update({'top_level_var_names': list(_ret.keys())})
_ret[self.DIY_NS].update({"top_level_var_names": list(_ret.keys())})
return _ret
def v2_on_any(self, *args, **kwargs):
self._diy_spec = self._get_output_specification(
loader=self._diy_loader,
variables=self._diy_spec['vars']
)
self._diy_spec = self._get_output_specification(loader=self._diy_loader, variables=self._diy_spec["vars"])
if self._using_diy(spec=self._diy_spec):
self._output(spec=self._diy_spec)
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super(CallbackModule, self).v2_on_any(*args, **kwargs)
super().v2_on_any(*args, **kwargs)
def v2_runner_on_failed(self, result, ignore_errors=False):
self._diy_spec = self._get_output_specification(
loader=self._diy_loader,
variables=self._get_vars(
playbook=self._diy_playbook,
play=self._diy_play,
task=self._diy_task,
result=result
)
playbook=self._diy_playbook, play=self._diy_play, task=self._diy_task, result=result
),
)
if self._using_diy(spec=self._diy_spec):
@@ -1046,17 +1154,14 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super(CallbackModule, self).v2_runner_on_failed(result, ignore_errors)
super().v2_runner_on_failed(result, ignore_errors)
def v2_runner_on_ok(self, result):
self._diy_spec = self._get_output_specification(
loader=self._diy_loader,
variables=self._get_vars(
playbook=self._diy_playbook,
play=self._diy_play,
task=self._diy_task,
result=result
)
playbook=self._diy_playbook, play=self._diy_play, task=self._diy_task, result=result
),
)
if self._using_diy(spec=self._diy_spec):
@@ -1064,17 +1169,14 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super(CallbackModule, self).v2_runner_on_ok(result)
super().v2_runner_on_ok(result)
def v2_runner_on_skipped(self, result):
self._diy_spec = self._get_output_specification(
loader=self._diy_loader,
variables=self._get_vars(
playbook=self._diy_playbook,
play=self._diy_play,
task=self._diy_task,
result=result
)
playbook=self._diy_playbook, play=self._diy_play, task=self._diy_task, result=result
),
)
if self._using_diy(spec=self._diy_spec):
@@ -1082,17 +1184,14 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super(CallbackModule, self).v2_runner_on_skipped(result)
super().v2_runner_on_skipped(result)
def v2_runner_on_unreachable(self, result):
self._diy_spec = self._get_output_specification(
loader=self._diy_loader,
variables=self._get_vars(
playbook=self._diy_playbook,
play=self._diy_play,
task=self._diy_task,
result=result
)
playbook=self._diy_playbook, play=self._diy_play, task=self._diy_task, result=result
),
)
if self._using_diy(spec=self._diy_spec):
@@ -1100,7 +1199,7 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super(CallbackModule, self).v2_runner_on_unreachable(result)
super().v2_runner_on_unreachable(result)
# not implemented as the call to this is not implemented yet
def v2_runner_on_async_poll(self, result):
@@ -1122,8 +1221,8 @@ class CallbackModule(Default):
play=self._diy_play,
task=self._diy_task,
result=result,
remove_attr_ref_loop=False
)
remove_attr_ref_loop=False,
),
)
if self._using_diy(spec=self._diy_spec):
@@ -1131,7 +1230,7 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super(CallbackModule, self).v2_runner_item_on_ok(result)
super().v2_runner_item_on_ok(result)
def v2_runner_item_on_failed(self, result):
self._diy_spec = self._get_output_specification(
@@ -1141,8 +1240,8 @@ class CallbackModule(Default):
play=self._diy_play,
task=self._diy_task,
result=result,
remove_attr_ref_loop=False
)
remove_attr_ref_loop=False,
),
)
if self._using_diy(spec=self._diy_spec):
@@ -1150,7 +1249,7 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super(CallbackModule, self).v2_runner_item_on_failed(result)
super().v2_runner_item_on_failed(result)
def v2_runner_item_on_skipped(self, result):
self._diy_spec = self._get_output_specification(
@@ -1160,8 +1259,8 @@ class CallbackModule(Default):
play=self._diy_play,
task=self._diy_task,
result=result,
remove_attr_ref_loop=False
)
remove_attr_ref_loop=False,
),
)
if self._using_diy(spec=self._diy_spec):
@@ -1169,17 +1268,14 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super(CallbackModule, self).v2_runner_item_on_skipped(result)
super().v2_runner_item_on_skipped(result)
def v2_runner_retry(self, result):
self._diy_spec = self._get_output_specification(
loader=self._diy_loader,
variables=self._get_vars(
playbook=self._diy_playbook,
play=self._diy_play,
task=self._diy_task,
result=result
)
playbook=self._diy_playbook, play=self._diy_play, task=self._diy_task, result=result
),
)
if self._using_diy(spec=self._diy_spec):
@@ -1187,7 +1283,7 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super(CallbackModule, self).v2_runner_retry(result)
super().v2_runner_retry(result)
def v2_runner_on_start(self, host, task):
self._diy_host = host
@@ -1196,11 +1292,8 @@ class CallbackModule(Default):
self._diy_spec = self._get_output_specification(
loader=self._diy_loader,
variables=self._get_vars(
playbook=self._diy_playbook,
play=self._diy_play,
host=self._diy_host,
task=self._diy_task
)
playbook=self._diy_playbook, play=self._diy_play, host=self._diy_host, task=self._diy_task
),
)
if self._using_diy(spec=self._diy_spec):
@@ -1208,17 +1301,14 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super(CallbackModule, self).v2_runner_on_start(host, task)
super().v2_runner_on_start(host, task)
def v2_playbook_on_start(self, playbook):
self._diy_playbook = playbook
self._diy_loader = self._diy_playbook.get_loader()
self._diy_spec = self._get_output_specification(
loader=self._diy_loader,
variables=self._get_vars(
playbook=self._diy_playbook
)
loader=self._diy_loader, variables=self._get_vars(playbook=self._diy_playbook)
)
if self._using_diy(spec=self._diy_spec):
@@ -1226,7 +1316,7 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super(CallbackModule, self).v2_playbook_on_start(playbook)
super().v2_playbook_on_start(playbook)
def v2_playbook_on_notify(self, handler, host):
self._diy_handler = handler
@@ -1235,11 +1325,8 @@ class CallbackModule(Default):
self._diy_spec = self._get_output_specification(
loader=self._diy_loader,
variables=self._get_vars(
playbook=self._diy_playbook,
play=self._diy_play,
host=self._diy_host,
handler=self._diy_handler
)
playbook=self._diy_playbook, play=self._diy_play, host=self._diy_host, handler=self._diy_handler
),
)
if self._using_diy(spec=self._diy_spec):
@@ -1247,44 +1334,34 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super(CallbackModule, self).v2_playbook_on_notify(handler, host)
super().v2_playbook_on_notify(handler, host)
def v2_playbook_on_no_hosts_matched(self):
self._diy_spec = self._get_output_specification(
loader=self._diy_loader,
variables=self._diy_spec['vars']
)
self._diy_spec = self._get_output_specification(loader=self._diy_loader, variables=self._diy_spec["vars"])
if self._using_diy(spec=self._diy_spec):
self._output(spec=self._diy_spec)
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super(CallbackModule, self).v2_playbook_on_no_hosts_matched()
super().v2_playbook_on_no_hosts_matched()
def v2_playbook_on_no_hosts_remaining(self):
self._diy_spec = self._get_output_specification(
loader=self._diy_loader,
variables=self._diy_spec['vars']
)
self._diy_spec = self._get_output_specification(loader=self._diy_loader, variables=self._diy_spec["vars"])
if self._using_diy(spec=self._diy_spec):
self._output(spec=self._diy_spec)
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super(CallbackModule, self).v2_playbook_on_no_hosts_remaining()
super().v2_playbook_on_no_hosts_remaining()
def v2_playbook_on_task_start(self, task, is_conditional):
self._diy_task = task
self._diy_spec = self._get_output_specification(
loader=self._diy_loader,
variables=self._get_vars(
playbook=self._diy_playbook,
play=self._diy_play,
task=self._diy_task
)
variables=self._get_vars(playbook=self._diy_playbook, play=self._diy_play, task=self._diy_task),
)
if self._using_diy(spec=self._diy_spec):
@@ -1292,7 +1369,7 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super(CallbackModule, self).v2_playbook_on_task_start(task, is_conditional)
super().v2_playbook_on_task_start(task, is_conditional)
# not implemented as the call to this is not implemented yet
def v2_playbook_on_cleanup_task_start(self, task):
@@ -1303,11 +1380,7 @@ class CallbackModule(Default):
self._diy_spec = self._get_output_specification(
loader=self._diy_loader,
variables=self._get_vars(
playbook=self._diy_playbook,
play=self._diy_play,
task=self._diy_task
)
variables=self._get_vars(playbook=self._diy_playbook, play=self._diy_play, task=self._diy_task),
)
if self._using_diy(spec=self._diy_spec):
@@ -1315,25 +1388,29 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super(CallbackModule, self).v2_playbook_on_handler_task_start(task)
super().v2_playbook_on_handler_task_start(task)
def v2_playbook_on_vars_prompt(self, varname, private=True, prompt=None, encrypt=None,
confirm=False, salt_size=None, salt=None, default=None,
unsafe=None):
self._diy_spec = self._get_output_specification(
loader=self._diy_loader,
variables=self._diy_spec['vars']
)
def v2_playbook_on_vars_prompt(
self,
varname,
private=True,
prompt=None,
encrypt=None,
confirm=False,
salt_size=None,
salt=None,
default=None,
unsafe=None,
):
self._diy_spec = self._get_output_specification(loader=self._diy_loader, variables=self._diy_spec["vars"])
if self._using_diy(spec=self._diy_spec):
self._output(spec=self._diy_spec)
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super(CallbackModule, self).v2_playbook_on_vars_prompt(
varname, private, prompt, encrypt,
confirm, salt_size, salt, default,
unsafe
super().v2_playbook_on_vars_prompt(
varname, private, prompt, encrypt, confirm, salt_size, salt, default, unsafe
)
# not implemented as the call to this is not implemented yet
@@ -1348,11 +1425,7 @@ class CallbackModule(Default):
self._diy_play = play
self._diy_spec = self._get_output_specification(
loader=self._diy_loader,
variables=self._get_vars(
playbook=self._diy_playbook,
play=self._diy_play
)
loader=self._diy_loader, variables=self._get_vars(playbook=self._diy_playbook, play=self._diy_play)
)
if self._using_diy(spec=self._diy_spec):
@@ -1360,18 +1433,14 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super(CallbackModule, self).v2_playbook_on_play_start(play)
super().v2_playbook_on_play_start(play)
def v2_playbook_on_stats(self, stats):
self._diy_stats = stats
self._diy_spec = self._get_output_specification(
loader=self._diy_loader,
variables=self._get_vars(
playbook=self._diy_playbook,
play=self._diy_play,
stats=self._diy_stats
)
variables=self._get_vars(playbook=self._diy_playbook, play=self._diy_play, stats=self._diy_stats),
)
if self._using_diy(spec=self._diy_spec):
@@ -1379,7 +1448,7 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super(CallbackModule, self).v2_playbook_on_stats(stats)
super().v2_playbook_on_stats(stats)
def v2_playbook_on_include(self, included_file):
self._diy_included_file = included_file
@@ -1390,8 +1459,8 @@ class CallbackModule(Default):
playbook=self._diy_playbook,
play=self._diy_play,
task=self._diy_included_file._task,
included_file=self._diy_included_file
)
included_file=self._diy_included_file,
),
)
if self._using_diy(spec=self._diy_spec):
@@ -1399,17 +1468,14 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super(CallbackModule, self).v2_playbook_on_include(included_file)
super().v2_playbook_on_include(included_file)
def v2_on_file_diff(self, result):
self._diy_spec = self._get_output_specification(
loader=self._diy_loader,
variables=self._get_vars(
playbook=self._diy_playbook,
play=self._diy_play,
task=self._diy_task,
result=result
)
playbook=self._diy_playbook, play=self._diy_play, task=self._diy_task, result=result
),
)
if self._using_diy(spec=self._diy_spec):
@@ -1417,4 +1483,4 @@ class CallbackModule(Default):
if self._parent_has_callback():
with self._suppress_stdout(enabled=self._using_diy(spec=self._diy_spec)):
super(CallbackModule, self).v2_on_file_diff(result)
super().v2_on_file_diff(result)
@@ -81,7 +81,6 @@ import getpass
import socket
import time
import uuid
from collections import OrderedDict
from contextlib import closing
from os.path import basename
@@ -90,8 +89,9 @@ from ansible.errors import AnsibleError, AnsibleRuntimeError
from ansible.module_utils.ansible_release import __version__ as ansible_version
from ansible.plugins.callback import CallbackBase
ELASTIC_LIBRARY_IMPORT_ERROR: ImportError | None
try:
from elasticapm import Client, capture_span, trace_parent_from_string, instrument, label
from elasticapm import Client, capture_span, instrument, label, trace_parent_from_string
except ImportError as imp_exc:
ELASTIC_LIBRARY_IMPORT_ERROR = imp_exc
else:
@@ -115,9 +115,9 @@ class TaskData:
def add_host(self, host):
if host.uuid in self.host_data:
if host.status == 'included':
if host.status == "included":
# concatenate task include output from multiple items
host.result = f'{self.host_data[host.uuid].result}\n{host.result}'
host.result = f"{self.host_data[host.uuid].result}\n{host.result}"
else:
return
@@ -137,21 +137,21 @@ class HostData:
self.finish = time.time()
class ElasticSource(object):
class ElasticSource:
def __init__(self, display):
self.ansible_playbook = ""
self.session = str(uuid.uuid4())
self.host = socket.gethostname()
try:
self.ip_address = socket.gethostbyname(socket.gethostname())
except Exception as e:
except Exception:
self.ip_address = None
self.user = getpass.getuser()
self._display = display
def start_task(self, tasks_data, hide_task_arguments, play_name, task):
""" record the start of a task for one or more hosts """
"""record the start of a task for one or more hosts"""
uuid = task._uuid
@@ -164,38 +164,50 @@ class ElasticSource(object):
args = None
if not task.no_log and not hide_task_arguments:
args = ', '.join((f'{k}={v}' for k, v in task.args.items()))
args = ", ".join((f"{k}={v}" for k, v in task.args.items()))
tasks_data[uuid] = TaskData(uuid, name, path, play_name, action, args)
def finish_task(self, tasks_data, status, result):
""" record the results of a task for a single host """
"""record the results of a task for a single host"""
task_uuid = result._task._uuid
if hasattr(result, '_host') and result._host is not None:
if hasattr(result, "_host") and result._host is not None:
host_uuid = result._host._uuid
host_name = result._host.name
else:
host_uuid = 'include'
host_name = 'include'
host_uuid = "include"
host_name = "include"
task = tasks_data[task_uuid]
task.add_host(HostData(host_uuid, host_name, status, result))
def generate_distributed_traces(self, tasks_data, status, end_time, traceparent, apm_service_name,
apm_server_url, apm_verify_server_cert, apm_secret_token, apm_api_key):
""" generate distributed traces from the collected TaskData and HostData """
def generate_distributed_traces(
self,
tasks_data,
status,
end_time,
traceparent,
apm_service_name,
apm_server_url,
apm_verify_server_cert,
apm_secret_token,
apm_api_key,
):
"""generate distributed traces from the collected TaskData and HostData"""
tasks = []
parent_start_time = None
for task_uuid, task in tasks_data.items():
for task in tasks_data.values():
if parent_start_time is None:
parent_start_time = task.start
tasks.append(task)
apm_cli = self.init_apm_client(apm_server_url, apm_service_name, apm_verify_server_cert, apm_secret_token, apm_api_key)
apm_cli = self.init_apm_client(
apm_server_url, apm_service_name, apm_verify_server_cert, apm_secret_token, apm_api_key
)
if apm_cli:
with closing(apm_cli):
instrument() # Only call this once, as early as possible.
@@ -211,78 +223,86 @@ class ElasticSource(object):
label(ansible_host_ip=self.ip_address)
for task_data in tasks:
for host_uuid, host_data in task_data.host_data.items():
for host_data in task_data.host_data.values():
self.create_span_data(apm_cli, task_data, host_data)
apm_cli.end_transaction(name=__name__, result=status, duration=end_time - parent_start_time)
def create_span_data(self, apm_cli, task_data, host_data):
""" create the span with the given TaskData and HostData """
"""create the span with the given TaskData and HostData"""
name = f'[{host_data.name}] {task_data.play}: {task_data.name}'
name = f"[{host_data.name}] {task_data.play}: {task_data.name}"
message = "success"
status = "success"
enriched_error_message = None
if host_data.status == 'included':
if host_data.status == "included":
rc = 0
else:
res = host_data.result._result
rc = res.get('rc', 0)
if host_data.status == 'failed':
rc = res.get("rc", 0)
if host_data.status == "failed":
message = self.get_error_message(res)
enriched_error_message = self.enrich_error_message(res)
status = "failure"
elif host_data.status == 'skipped':
if 'skip_reason' in res:
message = res['skip_reason']
elif host_data.status == "skipped":
if "skip_reason" in res:
message = res["skip_reason"]
else:
message = 'skipped'
message = "skipped"
status = "unknown"
with capture_span(task_data.name,
start=task_data.start,
span_type="ansible.task.run",
duration=host_data.finish - task_data.start,
labels={"ansible.task.args": task_data.args,
"ansible.task.message": message,
"ansible.task.module": task_data.action,
"ansible.task.name": name,
"ansible.task.result": rc,
"ansible.task.host.name": host_data.name,
"ansible.task.host.status": host_data.status}) as span:
with capture_span(
task_data.name,
start=task_data.start,
span_type="ansible.task.run",
duration=host_data.finish - task_data.start,
labels={
"ansible.task.args": task_data.args,
"ansible.task.message": message,
"ansible.task.module": task_data.action,
"ansible.task.name": name,
"ansible.task.result": rc,
"ansible.task.host.name": host_data.name,
"ansible.task.host.status": host_data.status,
},
) as span:
span.outcome = status
if 'failure' in status:
exception = AnsibleRuntimeError(message=f"{task_data.action}: {name} failed with error message {enriched_error_message}")
if "failure" in status:
exception = AnsibleRuntimeError(
message=f"{task_data.action}: {name} failed with error message {enriched_error_message}"
)
apm_cli.capture_exception(exc_info=(type(exception), exception, exception.__traceback__), handled=True)
def init_apm_client(self, apm_server_url, apm_service_name, apm_verify_server_cert, apm_secret_token, apm_api_key):
if apm_server_url:
return Client(service_name=apm_service_name,
server_url=apm_server_url,
verify_server_cert=False,
secret_token=apm_secret_token,
api_key=apm_api_key,
use_elastic_traceparent_header=True,
debug=True)
return Client(
service_name=apm_service_name,
server_url=apm_server_url,
verify_server_cert=False,
secret_token=apm_secret_token,
api_key=apm_api_key,
use_elastic_traceparent_header=True,
debug=True,
)
@staticmethod
def get_error_message(result):
if result.get('exception') is not None:
return ElasticSource._last_line(result['exception'])
return result.get('msg', 'failed')
if result.get("exception") is not None:
return ElasticSource._last_line(result["exception"])
return result.get("msg", "failed")
@staticmethod
def _last_line(text):
lines = text.strip().split('\n')
lines = text.strip().split("\n")
return lines[-1]
@staticmethod
def enrich_error_message(result):
message = result.get('msg', 'failed')
exception = result.get('exception')
stderr = result.get('stderr')
return f"message: \"{message}\"\nexception: \"{exception}\"\nstderr: \"{stderr}\""
message = result.get("msg", "failed")
exception = result.get("exception")
stderr = result.get("stderr")
return f'message: "{message}"\nexception: "{exception}"\nstderr: "{stderr}"'
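The two error helpers above compose: `get_error_message()` prefers the last line of a traceback over the generic `msg` field, falling back to `"failed"`. A minimal stand-alone sketch (names mirror the plugin's static methods, extracted here for illustration):

```python
# Sketch of the error-message helpers: the last traceback line is usually the
# most informative (e.g. the exception type and message), so it wins over msg.
def _last_line(text):
    return text.strip().split("\n")[-1]

def get_error_message(result):
    if result.get("exception") is not None:
        return _last_line(result["exception"])
    return result.get("msg", "failed")

res = {"msg": "task failed", "exception": "Traceback...\nValueError: boom"}
print(get_error_message(res))  # -> ValueError: boom
```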
class CallbackModule(CallbackBase):
@@ -291,12 +311,12 @@ class CallbackModule(CallbackBase):
"""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'notification'
CALLBACK_NAME = 'community.general.elastic'
CALLBACK_TYPE = "notification"
CALLBACK_NAME = "community.general.elastic"
CALLBACK_NEEDS_ENABLED = True
def __init__(self, display=None):
super(CallbackModule, self).__init__(display=display)
super().__init__(display=display)
self.hide_task_arguments = None
self.apm_service_name = None
self.ansible_playbook = None
@@ -307,28 +327,28 @@ class CallbackModule(CallbackBase):
self.disabled = False
if ELASTIC_LIBRARY_IMPORT_ERROR:
raise AnsibleError('The `elastic-apm` must be installed to use this plugin') from ELASTIC_LIBRARY_IMPORT_ERROR
raise AnsibleError(
"The `elastic-apm` must be installed to use this plugin"
) from ELASTIC_LIBRARY_IMPORT_ERROR
self.tasks_data = OrderedDict()
self.elastic = ElasticSource(display=self._display)
def set_options(self, task_keys=None, var_options=None, direct=None):
super(CallbackModule, self).set_options(task_keys=task_keys,
var_options=var_options,
direct=direct)
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
self.hide_task_arguments = self.get_option('hide_task_arguments')
self.hide_task_arguments = self.get_option("hide_task_arguments")
self.apm_service_name = self.get_option('apm_service_name')
self.apm_service_name = self.get_option("apm_service_name")
if not self.apm_service_name:
self.apm_service_name = 'ansible'
self.apm_service_name = "ansible"
self.apm_server_url = self.get_option('apm_server_url')
self.apm_secret_token = self.get_option('apm_secret_token')
self.apm_api_key = self.get_option('apm_api_key')
self.apm_verify_server_cert = self.get_option('apm_verify_server_cert')
self.traceparent = self.get_option('traceparent')
self.apm_server_url = self.get_option("apm_server_url")
self.apm_secret_token = self.get_option("apm_secret_token")
self.apm_api_key = self.get_option("apm_api_key")
self.apm_verify_server_cert = self.get_option("apm_verify_server_cert")
self.traceparent = self.get_option("traceparent")
def v2_playbook_on_start(self, playbook):
self.ansible_playbook = basename(playbook._file_name)
@@ -337,65 +357,29 @@ class CallbackModule(CallbackBase):
self.play_name = play.get_name()
def v2_runner_on_no_hosts(self, task):
self.elastic.start_task(
self.tasks_data,
self.hide_task_arguments,
self.play_name,
task
)
self.elastic.start_task(self.tasks_data, self.hide_task_arguments, self.play_name, task)
def v2_playbook_on_task_start(self, task, is_conditional):
self.elastic.start_task(
self.tasks_data,
self.hide_task_arguments,
self.play_name,
task
)
self.elastic.start_task(self.tasks_data, self.hide_task_arguments, self.play_name, task)
def v2_playbook_on_cleanup_task_start(self, task):
self.elastic.start_task(
self.tasks_data,
self.hide_task_arguments,
self.play_name,
task
)
self.elastic.start_task(self.tasks_data, self.hide_task_arguments, self.play_name, task)
def v2_playbook_on_handler_task_start(self, task):
self.elastic.start_task(
self.tasks_data,
self.hide_task_arguments,
self.play_name,
task
)
self.elastic.start_task(self.tasks_data, self.hide_task_arguments, self.play_name, task)
def v2_runner_on_failed(self, result, ignore_errors=False):
self.errors += 1
self.elastic.finish_task(
self.tasks_data,
'failed',
result
)
self.elastic.finish_task(self.tasks_data, "failed", result)
def v2_runner_on_ok(self, result):
self.elastic.finish_task(
self.tasks_data,
'ok',
result
)
self.elastic.finish_task(self.tasks_data, "ok", result)
def v2_runner_on_skipped(self, result):
self.elastic.finish_task(
self.tasks_data,
'skipped',
result
)
self.elastic.finish_task(self.tasks_data, "skipped", result)
def v2_playbook_on_include(self, included_file):
self.elastic.finish_task(
self.tasks_data,
'included',
included_file
)
self.elastic.finish_task(self.tasks_data, "included", included_file)
def v2_playbook_on_stats(self, stats):
if self.errors == 0:
@@ -411,7 +395,7 @@ class CallbackModule(CallbackBase):
self.apm_server_url,
self.apm_verify_server_cert,
self.apm_secret_token,
self.apm_api_key
self.apm_api_key,
)
def v2_runner_on_async_failed(self, result, **kwargs):


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (C) 2016 maxn nikolaev.makc@gmail.com
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -55,29 +54,31 @@ from ansible.plugins.callback import CallbackBase
class CallbackModule(CallbackBase):
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'notification'
CALLBACK_NAME = 'community.general.jabber'
CALLBACK_TYPE = "notification"
CALLBACK_NAME = "community.general.jabber"
CALLBACK_NEEDS_WHITELIST = True
def __init__(self, display=None):
super(CallbackModule, self).__init__(display=display)
super().__init__(display=display)
if not HAS_XMPP:
self._display.warning("The required python xmpp library (xmpppy) is not installed. "
"pip install git+https://github.com/ArchipelProject/xmpppy")
self._display.warning(
"The required python xmpp library (xmpppy) is not installed. "
"pip install git+https://github.com/ArchipelProject/xmpppy"
)
self.disabled = True
self.serv = os.getenv('JABBER_SERV')
self.j_user = os.getenv('JABBER_USER')
self.j_pass = os.getenv('JABBER_PASS')
self.j_to = os.getenv('JABBER_TO')
self.serv = os.getenv("JABBER_SERV")
self.j_user = os.getenv("JABBER_USER")
self.j_pass = os.getenv("JABBER_PASS")
self.j_to = os.getenv("JABBER_TO")
if not all((self.serv, self.j_user, self.j_pass, self.j_to)):
self.disabled = True
self._display.warning('Jabber CallBack wants the JABBER_SERV, JABBER_USER, JABBER_PASS and JABBER_TO environment variables')
self._display.warning(
"Jabber CallBack wants the JABBER_SERV, JABBER_USER, JABBER_PASS and JABBER_TO environment variables"
)
def send_msg(self, msg):
"""Send message"""
@@ -86,7 +87,7 @@ class CallbackModule(CallbackBase):
client.connect(server=(self.serv, 5222))
client.auth(jid.getNode(), self.j_pass, resource=jid.getResource())
message = xmpp.Message(self.j_to, msg)
message.setAttr('type', 'chat')
message.setAttr("type", "chat")
client.send(message)
client.disconnect()
@@ -110,9 +111,9 @@ class CallbackModule(CallbackBase):
unreachable = False
for h in hosts:
s = stats.summarize(h)
if s['failures'] > 0:
if s["failures"] > 0:
failures = True
if s['unreachable'] > 0:
if s["unreachable"] > 0:
unreachable = True
if failures or unreachable:


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2012, Michael DeHaan, <michael.dehaan@gmail.com>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -28,16 +27,15 @@ options:
key: log_folder
"""
import json
import os
import time
import json
from ansible.utils.path import makedirs_safe
from ansible.module_utils.common.text.converters import to_bytes
from collections.abc import MutableMapping
from ansible.module_utils.common.text.converters import to_bytes
from ansible.parsing.ajson import AnsibleJSONEncoder
from ansible.plugins.callback import CallbackBase
from ansible.utils.path import makedirs_safe
# NOTE: in Ansible 1.2 or later general logging is available without
# this plugin, just set ANSIBLE_LOG_PATH as an environment variable
@@ -50,9 +48,10 @@ class CallbackModule(CallbackBase):
"""
logs playbook results, per host, in /var/log/ansible/hosts
"""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'notification'
CALLBACK_NAME = 'community.general.log_plays'
CALLBACK_TYPE = "notification"
CALLBACK_NAME = "community.general.log_plays"
CALLBACK_NEEDS_WHITELIST = True
TIME_FORMAT = "%b %d %Y %H:%M:%S"
@@ -62,11 +61,10 @@ class CallbackModule(CallbackBase):
return f"{now} - {playbook} - {task_name} - {task_action} - {category} - {data}\n\n"
def __init__(self):
super(CallbackModule, self).__init__()
super().__init__()
def set_options(self, task_keys=None, var_options=None, direct=None):
super(CallbackModule, self).set_options(task_keys=task_keys, var_options=var_options, direct=direct)
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
self.log_folder = self.get_option("log_folder")
@@ -76,12 +74,12 @@ class CallbackModule(CallbackBase):
def log(self, result, category):
data = result._result
if isinstance(data, MutableMapping):
if '_ansible_verbose_override' in data:
if "_ansible_verbose_override" in data:
# avoid logging extraneous data
data = 'omitted'
data = "omitted"
else:
data = data.copy()
invocation = data.pop('invocation', None)
invocation = data.pop("invocation", None)
data = json.dumps(data, cls=AnsibleJSONEncoder)
if invocation is not None:
data = f"{json.dumps(invocation)} => {data} "
@@ -94,25 +92,25 @@ class CallbackModule(CallbackBase):
fd.write(msg)
def v2_runner_on_failed(self, result, ignore_errors=False):
self.log(result, 'FAILED')
self.log(result, "FAILED")
def v2_runner_on_ok(self, result):
self.log(result, 'OK')
self.log(result, "OK")
def v2_runner_on_skipped(self, result):
self.log(result, 'SKIPPED')
self.log(result, "SKIPPED")
def v2_runner_on_unreachable(self, result):
self.log(result, 'UNREACHABLE')
self.log(result, "UNREACHABLE")
def v2_runner_on_async_failed(self, result):
self.log(result, 'ASYNC_FAILED')
self.log(result, "ASYNC_FAILED")
def v2_playbook_on_start(self, playbook):
self.playbook = playbook._file_name
def v2_playbook_on_import_for_host(self, result, imported_file):
self.log(result, 'IMPORTED', imported_file)
self.log(result, "IMPORTED", imported_file)
def v2_playbook_on_not_import_for_host(self, result, missing_file):
self.log(result, 'NOTIMPORTED', missing_file)
self.log(result, "NOTIMPORTED", missing_file)


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (c) Ansible project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -52,14 +51,13 @@ examples: |-
shared_key = dZD0kCbKl3ehZG6LHFMuhtE0yHiFCmetzFMc2u+roXIUQuatqU924SsAAAAPemhjbGlAemhjbGktTUJQAQIDBA==
"""
import base64
import getpass
import hashlib
import hmac
import base64
import json
import uuid
import socket
import getpass
import uuid
from os.path import basename
from ansible.module_utils.ansible_release import __version__ as ansible_version
@@ -72,7 +70,7 @@ from ansible_collections.community.general.plugins.module_utils.datetime import
)
class AzureLogAnalyticsSource(object):
class AzureLogAnalyticsSource:
def __init__(self):
self.ansible_check_mode = False
self.ansible_playbook = ""
@@ -84,11 +82,10 @@ class AzureLogAnalyticsSource(object):
def __build_signature(self, date, workspace_id, shared_key, content_length):
# Build authorisation signature for Azure log analytics API call
sigs = f"POST\n{content_length}\napplication/json\nx-ms-date:{date}\n/api/logs"
utf8_sigs = sigs.encode('utf-8')
utf8_sigs = sigs.encode("utf-8")
decoded_shared_key = base64.b64decode(shared_key)
hmac_sha256_sigs = hmac.new(
decoded_shared_key, utf8_sigs, digestmod=hashlib.sha256).digest()
encoded_hash = base64.b64encode(hmac_sha256_sigs).decode('utf-8')
hmac_sha256_sigs = hmac.new(decoded_shared_key, utf8_sigs, digestmod=hashlib.sha256).digest()
encoded_hash = base64.b64encode(hmac_sha256_sigs).decode("utf-8")
signature = f"SharedKey {workspace_id}:{encoded_hash}"
return signature
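The signature built above follows the Azure Log Analytics Data Collector API scheme: an HMAC-SHA256 over a canonical request string, keyed with the base64-decoded workspace shared key. A self-contained sketch of the same computation (stdlib only; the sample key and date below are made up):

```python
import base64
import hashlib
import hmac

# SharedKey signing: the string-to-sign pins the method, body length,
# content type, request date, and resource path, so a captured signature
# cannot be replayed for a different payload.
def build_signature(date, workspace_id, shared_key, content_length):
    string_to_sign = f"POST\n{content_length}\napplication/json\nx-ms-date:{date}\n/api/logs"
    key = base64.b64decode(shared_key)
    digest = hmac.new(key, string_to_sign.encode("utf-8"), digestmod=hashlib.sha256).digest()
    return f"SharedKey {workspace_id}:{base64.b64encode(digest).decode('utf-8')}"

sig = build_signature("Mon, 01 Jan 2024 00:00:00 GMT", "wsid",
                      base64.b64encode(b"secret").decode(), 42)
print(sig)
```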
@@ -96,10 +93,10 @@ class AzureLogAnalyticsSource(object):
return f"https://{workspace_id}.ods.opinsights.azure.com/api/logs?api-version=2016-04-01"
def __rfc1123date(self):
return now().strftime('%a, %d %b %Y %H:%M:%S GMT')
return now().strftime("%a, %d %b %Y %H:%M:%S GMT")
def send_event(self, workspace_id, shared_key, state, result, runtime):
if result._task_fields['args'].get('_ansible_check_mode') is True:
if result._task_fields["args"].get("_ansible_check_mode") is True:
self.ansible_check_mode = True
if result._task._role:
@@ -108,31 +105,31 @@ class AzureLogAnalyticsSource(object):
ansible_role = None
data = {}
data['uuid'] = result._task._uuid
data['session'] = self.session
data['status'] = state
data['timestamp'] = self.__rfc1123date()
data['host'] = self.host
data['user'] = self.user
data['runtime'] = runtime
data['ansible_version'] = ansible_version
data['ansible_check_mode'] = self.ansible_check_mode
data['ansible_host'] = result._host.name
data['ansible_playbook'] = self.ansible_playbook
data['ansible_role'] = ansible_role
data['ansible_task'] = result._task_fields
data["uuid"] = result._task._uuid
data["session"] = self.session
data["status"] = state
data["timestamp"] = self.__rfc1123date()
data["host"] = self.host
data["user"] = self.user
data["runtime"] = runtime
data["ansible_version"] = ansible_version
data["ansible_check_mode"] = self.ansible_check_mode
data["ansible_host"] = result._host.name
data["ansible_playbook"] = self.ansible_playbook
data["ansible_role"] = ansible_role
data["ansible_task"] = result._task_fields
# Removing args since it can contain sensitive data
if 'args' in data['ansible_task']:
data['ansible_task'].pop('args')
data['ansible_result'] = result._result
if 'content' in data['ansible_result']:
data['ansible_result'].pop('content')
if "args" in data["ansible_task"]:
data["ansible_task"].pop("args")
data["ansible_result"] = result._result
if "content" in data["ansible_result"]:
data["ansible_result"].pop("content")
# Adding extra vars info
data['extra_vars'] = self.extra_vars
data["extra_vars"] = self.extra_vars
# Preparing the playbook logs as JSON format and send to Azure log analytics
jsondata = json.dumps({'event': data}, cls=AnsibleJSONEncoder, sort_keys=True)
jsondata = json.dumps({"event": data}, cls=AnsibleJSONEncoder, sort_keys=True)
content_length = len(jsondata)
rfc1123date = self.__rfc1123date()
signature = self.__build_signature(rfc1123date, workspace_id, shared_key, content_length)
@@ -142,38 +139,35 @@ class AzureLogAnalyticsSource(object):
workspace_url,
jsondata,
headers={
'content-type': 'application/json',
'Authorization': signature,
'Log-Type': 'ansible_playbook',
'x-ms-date': rfc1123date
"content-type": "application/json",
"Authorization": signature,
"Log-Type": "ansible_playbook",
"x-ms-date": rfc1123date,
},
method='POST'
method="POST",
)
class CallbackModule(CallbackBase):
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'notification'
CALLBACK_NAME = 'loganalytics'
CALLBACK_TYPE = "notification"
CALLBACK_NAME = "loganalytics"
CALLBACK_NEEDS_WHITELIST = True
def __init__(self, display=None):
super(CallbackModule, self).__init__(display=display)
super().__init__(display=display)
self.start_datetimes = {} # Collect task start times
self.workspace_id = None
self.shared_key = None
self.loganalytics = AzureLogAnalyticsSource()
def _seconds_since_start(self, result):
return (
now() -
self.start_datetimes[result._task._uuid]
).total_seconds()
return (now() - self.start_datetimes[result._task._uuid]).total_seconds()
def set_options(self, task_keys=None, var_options=None, direct=None):
super(CallbackModule, self).set_options(task_keys=task_keys, var_options=var_options, direct=direct)
self.workspace_id = self.get_option('workspace_id')
self.shared_key = self.get_option('shared_key')
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
self.workspace_id = self.get_option("workspace_id")
self.shared_key = self.get_option("shared_key")
def v2_playbook_on_play_start(self, play):
vm = play.get_variable_manager()
@@ -191,45 +185,25 @@ class CallbackModule(CallbackBase):
def v2_runner_on_ok(self, result, **kwargs):
self.loganalytics.send_event(
self.workspace_id,
self.shared_key,
'OK',
result,
self._seconds_since_start(result)
self.workspace_id, self.shared_key, "OK", result, self._seconds_since_start(result)
)
def v2_runner_on_skipped(self, result, **kwargs):
self.loganalytics.send_event(
self.workspace_id,
self.shared_key,
'SKIPPED',
result,
self._seconds_since_start(result)
self.workspace_id, self.shared_key, "SKIPPED", result, self._seconds_since_start(result)
)
def v2_runner_on_failed(self, result, **kwargs):
self.loganalytics.send_event(
self.workspace_id,
self.shared_key,
'FAILED',
result,
self._seconds_since_start(result)
self.workspace_id, self.shared_key, "FAILED", result, self._seconds_since_start(result)
)
def runner_on_async_failed(self, result, **kwargs):
self.loganalytics.send_event(
self.workspace_id,
self.shared_key,
'FAILED',
result,
self._seconds_since_start(result)
self.workspace_id, self.shared_key, "FAILED", result, self._seconds_since_start(result)
)
def v2_runner_on_unreachable(self, result, **kwargs):
self.loganalytics.send_event(
self.workspace_id,
self.shared_key,
'UNREACHABLE',
result,
self._seconds_since_start(result)
self.workspace_id, self.shared_key, "UNREACHABLE", result, self._seconds_since_start(result)
)


@@ -0,0 +1,342 @@
#!/usr/bin/env python
# Copyright (c) Ansible project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import annotations
DOCUMENTATION = """
name: loganalytics_ingestion
type: notification
short_description: Posts task results to an Azure Log Analytics workspace using the new Logs Ingestion API
author:
- Wade Cline (@wtcline-intc) <wade.cline@intel.com>
- Sriramoju Vishal Bharath (@vsh47) <sriramoju.vishal.bharath@intel.com>
- Cyrus Li (@zhcli) <cyrus1006@gmail.com>
description:
- This callback plugin will post task results in JSON format to an Azure Log Analytics workspace using the new Logs Ingestion API.
version_added: "12.4.0"
requirements:
- The callback plugin has been enabled.
- An Azure Log Analytics workspace has been established.
- A Data Collection Rule (DCR) and custom table are created.
options:
dce_url:
description: URL of the Data Collection Endpoint (DCE) for Azure Logs Ingestion API.
type: str
required: true
env:
- name: ANSIBLE_LOGANALYTICS_DCE_URL
ini:
- section: callback_loganalytics
key: dce_url
dcr_id:
description: Data Collection Rule (DCR) ID for the Azure Log Ingestion API.
type: str
required: true
env:
- name: ANSIBLE_LOGANALYTICS_DCR_ID
ini:
- section: callback_loganalytics
key: dcr_id
disable_attempts:
description:
- When O(disable_on_failure=true), number of plugin failures that must occur before the plugin is disabled.
- This helps prevent outright plugin failure from a single, transient network issue.
type: int
default: 3
env:
- name: ANSIBLE_LOGANALYTICS_DISABLE_ATTEMPTS
ini:
- section: callback_loganalytics
key: disable_attempts
disable_on_failure:
description: Stop trying to send data on plugin failure.
type: bool
default: true
env:
- name: ANSIBLE_LOGANALYTICS_DISABLE_ON_FAILURE
ini:
- section: callback_loganalytics
key: disable_on_failure
client_id:
description: Client ID of the Azure App registration for OAuth2 authentication ("Modern Authentication").
type: str
required: true
env:
- name: ANSIBLE_LOGANALYTICS_CLIENT_ID
ini:
- section: callback_loganalytics
key: client_id
client_secret:
description: Client Secret of the Azure App registration.
type: str
required: true
env:
- name: ANSIBLE_LOGANALYTICS_CLIENT_SECRET
ini:
- section: callback_loganalytics
key: client_secret
include_content:
description: Send the content to the Azure Log Analytics workspace.
type: bool
default: false
env:
- name: ANSIBLE_LOGANALYTICS_INCLUDE_CONTENT
ini:
- section: callback_loganalytics
key: include_content
include_task_args:
description: Send the task args to the Azure Log Analytics workspace.
type: bool
default: false
env:
- name: ANSIBLE_LOGANALYTICS_INCLUDE_TASK_ARGS
ini:
- section: callback_loganalytics
key: include_task_args
stream_name:
description: The name of the stream used to send the logs to the Azure Log Analytics workspace.
type: str
required: true
env:
- name: ANSIBLE_LOGANALYTICS_STREAM_NAME
ini:
- section: callback_loganalytics
key: stream_name
tenant_id:
description: Tenant ID for the Azure Active Directory.
type: str
required: true
env:
- name: ANSIBLE_LOGANALYTICS_TENANT_ID
ini:
- section: callback_loganalytics
key: tenant_id
timeout:
description: Timeout for the HTTP requests to the Azure Log Analytics API.
type: int
default: 2
env:
- name: ANSIBLE_LOGANALYTICS_TIMEOUT
ini:
- section: callback_loganalytics
key: timeout
seealso:
- name: Logs Ingestion API
description: Overview of Logs Ingestion API in Azure Monitor
link: https://learn.microsoft.com/en-us/azure/azure-monitor/logs/logs-ingestion-api-overview
notes:
- Triple verbosity logging (C(-vvv)) can be used to generate JSON sample data for creating the table schema in Azure Log Analytics.
Search for the string C(Event Data:) in the output in order to locate the data sample.
"""
EXAMPLES = """
examples: |
Enable the plugin in ansible.cfg:
[defaults]
callbacks_enabled = community.general.loganalytics_ingestion
Set the environment variables:
export ANSIBLE_LOGANALYTICS_DCE_URL=https://my-dce.ingest.monitor.azure.com
export ANSIBLE_LOGANALYTICS_DCR_ID=dcr-xxxxxx
export ANSIBLE_LOGANALYTICS_CLIENT_ID=xxxxxxxx
export ANSIBLE_LOGANALYTICS_CLIENT_SECRET=xxxxxxxx
export ANSIBLE_LOGANALYTICS_TENANT_ID=xxxxxxxx
export ANSIBLE_LOGANALYTICS_STREAM_NAME=Custom-MyTable
"""
import getpass
import json
import socket
import uuid
from datetime import datetime, timedelta, timezone
from os.path import basename
from urllib.parse import urlencode
from ansible.module_utils.urls import open_url
from ansible.plugins.callback import CallbackBase
from ansible.utils.display import Display
display = Display()
class AzureLogAnalyticsIngestionSource:
def __init__(
self,
dce_url,
dcr_id,
disable_attempts,
disable_on_failure,
client_id,
client_secret,
tenant_id,
stream_name,
include_task_args,
include_content,
timeout,
fqcn,
):
self.dce_url = dce_url
self.dcr_id = dcr_id
self.disabled = False
self.disable_attempts = disable_attempts
self.disable_on_failure = disable_on_failure
self.client_id = client_id
self.client_secret = client_secret
self.failures = 0
self.tenant_id = tenant_id
self.stream_name = stream_name
self.include_task_args = include_task_args
self.include_content = include_content
self.token_expiration_time = None
self.session = str(uuid.uuid4())
self.host = socket.gethostname()
self.user = getpass.getuser()
self.timeout = timeout
self.fqcn = fqcn
self.bearer_token = self.get_bearer_token()
# OAuth2 authentication method to get a Bearer token
# This replaces the shared_key authentication mechanism
def get_bearer_token(self):
url = f"https://login.microsoftonline.com/{self.tenant_id}/oauth2/v2.0/token"
headers = {"Content-Type": "application/x-www-form-urlencoded"}
data = urlencode(
{
"grant_type": "client_credentials",
"client_id": self.client_id,
"client_secret": self.client_secret,
# The scope value comes from https://learn.microsoft.com/en-us/azure/azure-monitor/logs/logs-ingestion-api-overview#headers
# and https://learn.microsoft.com/en-us/entra/identity-platform/scopes-oidc#the-default-scope
"scope": "https://monitor.azure.com/.default",
}
)
response = open_url(url, data=data, force=True, headers=headers, method="POST", timeout=self.timeout)
j = json.loads(response.read().decode("utf-8"))
self.token_expiration_time = datetime.now() + timedelta(seconds=j.get("expires_in"))
return j.get("access_token")
def is_token_valid(self):
return datetime.now() + timedelta(seconds=10) < self.token_expiration_time
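The expiry bookkeeping above refreshes the OAuth2 token slightly early: the expiry time recorded from the endpoint's `expires_in` is compared against "now plus 10 seconds", so a token about to die mid-request is treated as already invalid. A stand-alone sketch (the class here is illustrative, not part of the plugin):

```python
from datetime import datetime, timedelta

# Token-expiry cache with 10 s of slack, mirroring the plugin's
# token_expiration_time / is_token_valid() pair.
class TokenCache:
    def __init__(self):
        self.token_expiration_time = None

    def store(self, expires_in):
        # expires_in comes from the token endpoint's JSON response
        self.token_expiration_time = datetime.now() + timedelta(seconds=expires_in)

    def is_token_valid(self):
        return datetime.now() + timedelta(seconds=10) < self.token_expiration_time

cache = TokenCache()
cache.store(3600)
print(cache.is_token_valid())  # -> True
cache.store(5)                 # expires inside the 10 s slack window
print(cache.is_token_valid())  # -> False
```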
# Method to send event data to the Azure Logs Ingestion API
# This replaces the legacy API call and now uses the Logs Ingestion API endpoint
def send_event(self, event_data):
if not self.is_token_valid():
self.bearer_token = self.get_bearer_token()
ingestion_url = (
f"{self.dce_url}/dataCollectionRules/{self.dcr_id}/streams/{self.stream_name}?api-version=2023-01-01"
)
headers = {"Authorization": f"Bearer {self.bearer_token}", "Content-Type": "application/json"}
open_url(ingestion_url, data=json.dumps(event_data), headers=headers, method="POST", timeout=self.timeout)
def _rfc1123date(self):
return datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")
# This method wraps the private method with the appropriate error handling.
def send_to_loganalytics(self, playbook_name, result, state):
if self.disabled:
return
try:
self._send_to_loganalytics(playbook_name, result, state)
except Exception as e:
display.warning(f"{self.fqcn} callback plugin failure: {e}.")
if self.disable_on_failure:
self.failures += 1
if self.failures >= self.disable_attempts:
display.warning(
f"{self.fqcn} callback plugin failures exceed maximum of '{self.disable_attempts}'! Disabling plugin!"
)
self.disabled = True
else:
display.v(f"{self.fqcn} callback plugin failure {self.failures}/{self.disable_attempts}")
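The wrapper above implements a disable-on-failure gate: each failed send increments a counter, and once `disable_attempts` is reached the plugin switches itself off, so a single transient network error does not abort the run while a dead endpoint stops being retried forever. A minimal sketch of the pattern (simplified; no warnings or options handling):

```python
# Failure gate: swallow exceptions from fn, count them, and stop calling
# fn entirely once the failure threshold is reached.
class FailureGate:
    def __init__(self, disable_attempts=3):
        self.disable_attempts = disable_attempts
        self.failures = 0
        self.disabled = False

    def call(self, fn):
        if self.disabled:
            return
        try:
            fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.disable_attempts:
                self.disabled = True

def always_fails():
    raise RuntimeError("endpoint down")

gate = FailureGate(disable_attempts=3)
for _ in range(5):
    gate.call(always_fails)
print(gate.disabled, gate.failures)  # -> True 3
```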
def _send_to_loganalytics(self, playbook_name, result, state):
ansible_role = str(result._task._role) if result._task._role else None
# Include/Exclude task args
if not self.include_task_args:
result._task_fields.pop("args", None)
# Include/Exclude content
if not self.include_content:
result._result.pop("content", None)
# Build the event data
event_data = [
{
"TimeGenerated": self._rfc1123date(),
"Host": result._host.name,
"User": self.user,
"Playbook": playbook_name,
"Role": ansible_role,
"TaskName": result._task.get_name(),
"Task": result._task_fields,
"Action": result._task_fields["action"],
"State": state,
"Result": result._result,
"Session": self.session,
}
]
# The data displayed here can be used as a sample file in order to create the table's schema.
display.vvv(f"Event Data: {json.dumps(event_data)}")
self.send_event(event_data)
class CallbackModule(CallbackBase):
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "notification"
CALLBACK_NAME = "loganalytics_ingestion"
CALLBACK_NEEDS_ENABLED = True
def __init__(self, display=None):
super().__init__(display=display)
self.start_datetimes = {}
self.playbook_name = None
self.azure_loganalytics = None
self.fqcn = f"community.general.{self.CALLBACK_NAME}"
def set_options(self, task_keys=None, var_options=None, direct=None):
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
# Set options for the new Azure Logs Ingestion API configuration
self.client_id = self.get_option("client_id")
self.client_secret = self.get_option("client_secret")
self.dce_url = self.get_option("dce_url")
self.dcr_id = self.get_option("dcr_id")
self.disable_attempts = self.get_option("disable_attempts")
self.disable_on_failure = self.get_option("disable_on_failure")
self.include_content = self.get_option("include_content")
self.include_task_args = self.get_option("include_task_args")
self.stream_name = self.get_option("stream_name")
self.tenant_id = self.get_option("tenant_id")
self.timeout = self.get_option("timeout")
# Initialize the AzureLogAnalyticsIngestionSource with the new settings
self.azure_loganalytics = AzureLogAnalyticsIngestionSource(
self.dce_url,
self.dcr_id,
self.disable_attempts,
self.disable_on_failure,
self.client_id,
self.client_secret,
self.tenant_id,
self.stream_name,
self.include_task_args,
self.include_content,
self.timeout,
self.fqcn,
)
def v2_playbook_on_start(self, playbook):
self.playbook_name = basename(playbook._file_name)
# Build event data and send it to the Logs Ingestion API
def v2_runner_on_failed(self, result, **kwargs):
self.azure_loganalytics.send_to_loganalytics(self.playbook_name, result, "FAILED")
def v2_runner_on_ok(self, result, **kwargs):
self.azure_loganalytics.send_to_loganalytics(self.playbook_name, result, "OK")


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2018, Samir Musali <samir.musali@logdna.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -56,15 +55,17 @@ options:
default: ansible
"""
import logging
import json
import logging
import socket
from uuid import getnode
from ansible.plugins.callback import CallbackBase
from ansible.parsing.ajson import AnsibleJSONEncoder
from ansible.plugins.callback import CallbackBase
try:
from logdna import LogDNAHandler
HAS_LOGDNA = True
except ImportError:
HAS_LOGDNA = False
@@ -73,12 +74,12 @@ except ImportError:
# Getting MAC Address of system:
def get_mac():
mac = f"{getnode():012x}"
return ":".join(map(lambda index: mac[index:index + 2], range(int(len(mac) / 2))))
return ":".join(map(lambda index: mac[index : index + 2], range(int(len(mac) / 2))))
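For reference, the usual way to render a 48-bit node id as a MAC string is to format it as 12 hex digits and join non-overlapping 2-character chunks — stepping the range by 2 yields the six octet pairs. A hedged sketch (`format_mac` is an illustrative name, not part of the plugin):

```python
from uuid import getnode

# Format a 48-bit node id as colon-separated octet pairs, e.g. 00:11:22:33:44:55.
def format_mac(node=None):
    mac = f"{getnode() if node is None else node:012x}"
    return ":".join(mac[i:i + 2] for i in range(0, len(mac), 2))

print(format_mac(0x001122334455))  # -> 00:11:22:33:44:55
```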
# Getting hostname of system:
def get_hostname():
return str(socket.gethostname()).split('.local', 1)[0]
return str(socket.gethostname()).split(".local", 1)[0]
# Getting IP of system:
@@ -88,10 +89,10 @@ def get_ip():
except Exception:
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
s.connect(('10.255.255.255', 1))
s.connect(("10.255.255.255", 1))
IP = s.getsockname()[0]
except Exception:
IP = '127.0.0.1'
IP = "127.0.0.1"
finally:
s.close()
return IP
@@ -108,14 +109,13 @@ def isJSONable(obj):
# LogDNA Callback Module:
class CallbackModule(CallbackBase):
CALLBACK_VERSION = 0.1
CALLBACK_TYPE = 'notification'
CALLBACK_NAME = 'community.general.logdna'
CALLBACK_TYPE = "notification"
CALLBACK_NAME = "community.general.logdna"
CALLBACK_NEEDS_WHITELIST = True
def __init__(self, display=None):
super(CallbackModule, self).__init__(display=display)
super().__init__(display=display)
self.disabled = True
self.playbook_name = None
@@ -126,29 +126,29 @@ class CallbackModule(CallbackBase):
self.conf_tags = None
def set_options(self, task_keys=None, var_options=None, direct=None):
super(CallbackModule, self).set_options(task_keys=task_keys, var_options=var_options, direct=direct)
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
self.conf_key = self.get_option('conf_key')
self.plugin_ignore_errors = self.get_option('plugin_ignore_errors')
self.conf_hostname = self.get_option('conf_hostname')
self.conf_tags = self.get_option('conf_tags')
self.conf_key = self.get_option("conf_key")
self.plugin_ignore_errors = self.get_option("plugin_ignore_errors")
self.conf_hostname = self.get_option("conf_hostname")
self.conf_tags = self.get_option("conf_tags")
self.mac = get_mac()
self.ip = get_ip()
if self.conf_hostname is None:
self.conf_hostname = get_hostname()
self.conf_tags = self.conf_tags.split(',')
self.conf_tags = self.conf_tags.split(",")
if HAS_LOGDNA:
self.log = logging.getLogger('logdna')
self.log = logging.getLogger("logdna")
self.log.setLevel(logging.INFO)
self.options = {'hostname': self.conf_hostname, 'mac': self.mac, 'index_meta': True}
self.options = {"hostname": self.conf_hostname, "mac": self.mac, "index_meta": True}
self.log.addHandler(LogDNAHandler(self.conf_key, self.options))
self.disabled = False
else:
self.disabled = True
self._display.warning('WARNING:\nPlease, install LogDNA Python Package: `pip install logdna`')
self._display.warning("WARNING:\nPlease, install LogDNA Python Package: `pip install logdna`")
def metaIndexing(self, meta):
invalidKeys = []
@@ -160,25 +160,25 @@ class CallbackModule(CallbackBase):
if ninvalidKeys > 0:
for key in invalidKeys:
del meta[key]
meta['__errors'] = f"These keys have been sanitized: {', '.join(invalidKeys)}"
meta["__errors"] = f"These keys have been sanitized: {', '.join(invalidKeys)}"
return meta
def sanitizeJSON(self, data):
try:
return json.loads(json.dumps(data, sort_keys=True, cls=AnsibleJSONEncoder))
except Exception:
return {'warnings': ['JSON Formatting Issue', json.dumps(data, sort_keys=True, cls=AnsibleJSONEncoder)]}
return {"warnings": ["JSON Formatting Issue", json.dumps(data, sort_keys=True, cls=AnsibleJSONEncoder)]}
def flush(self, log, options):
if HAS_LOGDNA:
self.log.info(json.dumps(log), options)
def sendLog(self, host, category, logdata):
options = {'app': 'ansible', 'meta': {'playbook': self.playbook_name, 'host': host, 'category': category}}
logdata['info'].pop('invocation', None)
warnings = logdata['info'].pop('warnings', None)
options = {"app": "ansible", "meta": {"playbook": self.playbook_name, "host": host, "category": category}}
logdata["info"].pop("invocation", None)
warnings = logdata["info"].pop("warnings", None)
if warnings is not None:
self.flush({'warn': warnings}, options)
self.flush({"warn": warnings}, options)
self.flush(logdata, options)
def v2_playbook_on_start(self, playbook):
@@ -189,21 +189,21 @@ class CallbackModule(CallbackBase):
result = dict()
for host in stats.processed.keys():
result[host] = stats.summarize(host)
self.sendLog(self.conf_hostname, 'STATS', {'info': self.sanitizeJSON(result)})
self.sendLog(self.conf_hostname, "STATS", {"info": self.sanitizeJSON(result)})
def runner_on_failed(self, host, res, ignore_errors=False):
if self.plugin_ignore_errors:
ignore_errors = self.plugin_ignore_errors
self.sendLog(host, 'FAILED', {'info': self.sanitizeJSON(res), 'ignore_errors': ignore_errors})
self.sendLog(host, "FAILED", {"info": self.sanitizeJSON(res), "ignore_errors": ignore_errors})
def runner_on_ok(self, host, res):
self.sendLog(host, 'OK', {'info': self.sanitizeJSON(res)})
self.sendLog(host, "OK", {"info": self.sanitizeJSON(res)})
def runner_on_unreachable(self, host, res):
self.sendLog(host, 'UNREACHABLE', {'info': self.sanitizeJSON(res)})
self.sendLog(host, "UNREACHABLE", {"info": self.sanitizeJSON(res)})
def runner_on_async_failed(self, host, res, jid):
self.sendLog(host, 'ASYNC_FAILED', {'info': self.sanitizeJSON(res), 'job_id': jid})
self.sendLog(host, "ASYNC_FAILED", {"info": self.sanitizeJSON(res), "job_id": jid})
def runner_on_async_ok(self, host, res, jid):
self.sendLog(host, 'ASYNC_OK', {'info': self.sanitizeJSON(res), 'job_id': jid})
self.sendLog(host, "ASYNC_OK", {"info": self.sanitizeJSON(res), "job_id": jid})


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2015, Logentries.com, Jimmy Tang <jimmy.tang@logentries.com>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -97,19 +96,21 @@ examples: >-
"""
import os
import socket
import random
import socket
import time
import uuid
try:
import certifi
HAS_CERTIFI = True
except ImportError:
HAS_CERTIFI = False
try:
import flatdict
HAS_FLATDICT = True
except ImportError:
HAS_FLATDICT = False
@@ -121,9 +122,8 @@ from ansible.plugins.callback import CallbackBase
# * Better formatting of output before sending out to logentries data/api nodes.
class PlainTextSocketAppender(object):
def __init__(self, display, LE_API='data.logentries.com', LE_PORT=80, LE_TLS_PORT=443):
class PlainTextSocketAppender:
def __init__(self, display, LE_API="data.logentries.com", LE_PORT=80, LE_TLS_PORT=443):
self.LE_API = LE_API
self.LE_PORT = LE_PORT
self.LE_TLS_PORT = LE_TLS_PORT
@@ -132,7 +132,7 @@ class PlainTextSocketAppender(object):
# Error message displayed when an incorrect Token has been detected
self.INVALID_TOKEN = "\n\nIt appears the LOGENTRIES_TOKEN parameter you entered is incorrect!\n\n"
# Unicode Line separator character \u2028
self.LINE_SEP = '\u2028'
self.LINE_SEP = "\u2028"
self._display = display
self._conn = None
@@ -171,14 +171,14 @@ class PlainTextSocketAppender(object):
def put(self, data):
# Replace newlines with Unicode line separator
# for multi-line events
data = to_text(data, errors='surrogate_or_strict')
multiline = data.replace('\n', self.LINE_SEP)
data = to_text(data, errors="surrogate_or_strict")
multiline = data.replace("\n", self.LINE_SEP)
multiline += "\n"
# Send data, reconnect if needed
while True:
try:
self._conn.send(to_bytes(multiline, errors='surrogate_or_strict'))
except socket.error:
self._conn.send(to_bytes(multiline, errors="surrogate_or_strict"))
except OSError:
self.reopen_connection()
continue
break
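The `put()` method above encodes one event per line: real newlines inside the payload are swapped for U+2028 (the Unicode line separator) and a single trailing `"\n"` terminates the event, since the Logentries ingestion endpoint treats `"\n"` as the event boundary. The encoding step in isolation:

```python
LINE_SEP = "\u2028"  # Unicode line separator


def encode_event(data: str) -> str:
    # Embedded newlines become U+2028 so a multi-line payload stays one
    # event; the single trailing "\n" marks the end of the event.
    return data.replace("\n", LINE_SEP) + "\n"
```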
@@ -188,6 +188,7 @@ class PlainTextSocketAppender(object):
try:
import ssl
HAS_SSL = True
except ImportError: # for systems without TLS support.
SocketAppender = PlainTextSocketAppender
@@ -199,27 +200,28 @@ else:
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
context = ssl.create_default_context(
purpose=ssl.Purpose.SERVER_AUTH,
cafile=certifi.where(), )
cafile=certifi.where(),
)
sock = context.wrap_socket(
sock=sock,
do_handshake_on_connect=True,
suppress_ragged_eofs=True, )
suppress_ragged_eofs=True,
)
sock.connect((self.LE_API, self.LE_TLS_PORT))
self._conn = sock
SocketAppender = TLSSocketAppender
SocketAppender = TLSSocketAppender # type: ignore
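The `TLSSocketAppender` above wraps a plain TCP socket with a context from `ssl.create_default_context()`, optionally pinning the CA bundle to `certifi.where()`. A hedged sketch of the same pattern (this version also passes `server_hostname`, which enables SNI and hostname verification; `open_tls_socket` is this sketch's name, not the plugin's):

```python
import socket
import ssl


def build_tls_context(cafile=None) -> ssl.SSLContext:
    # The default context verifies certificates and hostnames; cafile can
    # point at certifi.where() when the system trust store is unusable.
    return ssl.create_default_context(purpose=ssl.Purpose.SERVER_AUTH, cafile=cafile)


def open_tls_socket(host: str, port: int = 443) -> ssl.SSLSocket:
    context = build_tls_context()
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # server_hostname drives both SNI and certificate hostname checking
    wrapped = context.wrap_socket(sock, server_hostname=host)
    wrapped.connect((host, port))
    return wrapped
```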
class CallbackModule(CallbackBase):
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'notification'
CALLBACK_NAME = 'community.general.logentries'
CALLBACK_TYPE = "notification"
CALLBACK_NAME = "community.general.logentries"
CALLBACK_NEEDS_WHITELIST = True
def __init__(self):
# TODO: allow for alternate posting methods (REST/UDP/agent/etc)
super(CallbackModule, self).__init__()
super().__init__()
# verify dependencies
if not HAS_SSL:
@@ -227,7 +229,9 @@ class CallbackModule(CallbackBase):
if not HAS_CERTIFI:
self.disabled = True
self._display.warning('The `certifi` python module is not installed.\nDisabling the Logentries callback plugin.')
self._display.warning(
"The `certifi` python module is not installed.\nDisabling the Logentries callback plugin."
)
self.le_jobid = str(uuid.uuid4())
@@ -235,41 +239,47 @@ class CallbackModule(CallbackBase):
self.timeout = 10
def set_options(self, task_keys=None, var_options=None, direct=None):
super(CallbackModule, self).set_options(task_keys=task_keys, var_options=var_options, direct=direct)
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
# get options
try:
self.api_url = self.get_option('api')
self.api_port = self.get_option('port')
self.api_tls_port = self.get_option('tls_port')
self.use_tls = self.get_option('use_tls')
self.flatten = self.get_option('flatten')
self.api_url = self.get_option("api")
self.api_port = self.get_option("port")
self.api_tls_port = self.get_option("tls_port")
self.use_tls = self.get_option("use_tls")
self.flatten = self.get_option("flatten")
except KeyError as e:
self._display.warning(f"Missing option for Logentries callback plugin: {e}")
self.disabled = True
try:
self.token = self.get_option('token')
except KeyError as e:
self._display.warning('Logentries token was not provided, this is required for this callback to operate, disabling')
self.token = self.get_option("token")
except KeyError:
self._display.warning(
"Logentries token was not provided, this is required for this callback to operate, disabling"
)
self.disabled = True
if self.flatten and not HAS_FLATDICT:
self.disabled = True
self._display.warning('You have chosen to flatten and the `flatdict` python module is not installed.\nDisabling the Logentries callback plugin.')
self._display.warning(
"You have chosen to flatten and the `flatdict` python module is not installed.\nDisabling the Logentries callback plugin."
)
self._initialize_connections()
def _initialize_connections(self):
if not self.disabled:
if self.use_tls:
self._display.vvvv(f"Connecting to {self.api_url}:{self.api_tls_port} with TLS")
self._appender = TLSSocketAppender(display=self._display, LE_API=self.api_url, LE_TLS_PORT=self.api_tls_port)
self._appender = TLSSocketAppender(
display=self._display, LE_API=self.api_url, LE_TLS_PORT=self.api_tls_port
)
else:
self._display.vvvv(f"Connecting to {self.api_url}:{self.api_port}")
self._appender = PlainTextSocketAppender(display=self._display, LE_API=self.api_url, LE_PORT=self.api_port)
self._appender = PlainTextSocketAppender(
display=self._display, LE_API=self.api_url, LE_PORT=self.api_port
)
self._appender.reopen_connection()
def emit_formatted(self, record):
@@ -280,50 +290,50 @@ class CallbackModule(CallbackBase):
self.emit(self._dump_results(record))
def emit(self, record):
msg = record.rstrip('\n')
msg = record.rstrip("\n")
msg = f"{self.token} {msg}"
self._appender.put(msg)
self._display.vvvv("Sent event to logentries")
def _set_info(self, host, res):
return {'le_jobid': self.le_jobid, 'hostname': host, 'results': res}
return {"le_jobid": self.le_jobid, "hostname": host, "results": res}
def runner_on_ok(self, host, res):
results = self._set_info(host, res)
results['status'] = 'OK'
results["status"] = "OK"
self.emit_formatted(results)
def runner_on_failed(self, host, res, ignore_errors=False):
results = self._set_info(host, res)
results['status'] = 'FAILED'
results["status"] = "FAILED"
self.emit_formatted(results)
def runner_on_skipped(self, host, item=None):
results = self._set_info(host, item)
del results['results']
results['status'] = 'SKIPPED'
del results["results"]
results["status"] = "SKIPPED"
self.emit_formatted(results)
def runner_on_unreachable(self, host, res):
results = self._set_info(host, res)
results['status'] = 'UNREACHABLE'
results["status"] = "UNREACHABLE"
self.emit_formatted(results)
def runner_on_async_failed(self, host, res, jid):
results = self._set_info(host, res)
results['jid'] = jid
results['status'] = 'ASYNC_FAILED'
results["jid"] = jid
results["status"] = "ASYNC_FAILED"
self.emit_formatted(results)
def v2_playbook_on_play_start(self, play):
results = {}
results['le_jobid'] = self.le_jobid
results['started_by'] = os.getlogin()
results["le_jobid"] = self.le_jobid
results["started_by"] = os.getlogin()
if play.name:
results['play'] = play.name
results['hosts'] = play.hosts
results["play"] = play.name
results["hosts"] = play.hosts
self.emit_formatted(results)
def playbook_on_stats(self, stats):
""" close connection """
"""close connection"""
self._appender.close_connection()


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2020, Yevhen Khmelenko <ujenmr@gmail.com>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -95,15 +94,17 @@ ansible.cfg: |
}
"""
import os
import json
from ansible import context
import logging
import os
import socket
import uuid
import logging
from ansible import context
try:
import logstash
HAS_LOGSTASH = True
except ImportError:
HAS_LOGSTASH = False
@@ -116,14 +117,13 @@ from ansible_collections.community.general.plugins.module_utils.datetime import
class CallbackModule(CallbackBase):
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'notification'
CALLBACK_NAME = 'community.general.logstash'
CALLBACK_TYPE = "notification"
CALLBACK_NAME = "community.general.logstash"
CALLBACK_NEEDS_WHITELIST = True
def __init__(self):
super(CallbackModule, self).__init__()
super().__init__()
if not HAS_LOGSTASH:
self.disabled = True
@@ -133,14 +133,11 @@ class CallbackModule(CallbackBase):
def _init_plugin(self):
if not self.disabled:
self.logger = logging.getLogger('python-logstash-logger')
self.logger = logging.getLogger("python-logstash-logger")
self.logger.setLevel(logging.DEBUG)
self.handler = logstash.TCPLogstashHandler(
self.ls_server,
self.ls_port,
version=1,
message_type=self.ls_type
self.ls_server, self.ls_port, version=1, message_type=self.ls_type
)
self.logger.addHandler(self.handler)
@@ -148,42 +145,36 @@ class CallbackModule(CallbackBase):
self.session = str(uuid.uuid4())
self.errors = 0
self.base_data = {
'session': self.session,
'host': self.hostname
}
self.base_data = {"session": self.session, "host": self.hostname}
if self.ls_pre_command is not None:
self.base_data['ansible_pre_command_output'] = os.popen(
self.ls_pre_command).read()
self.base_data["ansible_pre_command_output"] = os.popen(self.ls_pre_command).read()
if context.CLIARGS is not None:
self.base_data['ansible_checkmode'] = context.CLIARGS.get('check')
self.base_data['ansible_tags'] = context.CLIARGS.get('tags')
self.base_data['ansible_skip_tags'] = context.CLIARGS.get('skip_tags')
self.base_data['inventory'] = context.CLIARGS.get('inventory')
self.base_data["ansible_checkmode"] = context.CLIARGS.get("check")
self.base_data["ansible_tags"] = context.CLIARGS.get("tags")
self.base_data["ansible_skip_tags"] = context.CLIARGS.get("skip_tags")
self.base_data["inventory"] = context.CLIARGS.get("inventory")
def set_options(self, task_keys=None, var_options=None, direct=None):
super(CallbackModule, self).set_options(task_keys=task_keys, var_options=var_options, direct=direct)
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
self.ls_server = self.get_option('server')
self.ls_port = int(self.get_option('port'))
self.ls_type = self.get_option('type')
self.ls_pre_command = self.get_option('pre_command')
self.ls_format_version = self.get_option('format_version')
self.ls_server = self.get_option("server")
self.ls_port = int(self.get_option("port"))
self.ls_type = self.get_option("type")
self.ls_pre_command = self.get_option("pre_command")
self.ls_format_version = self.get_option("format_version")
self._init_plugin()
def v2_playbook_on_start(self, playbook):
data = self.base_data.copy()
data['ansible_type'] = "start"
data['status'] = "OK"
data['ansible_playbook'] = playbook._file_name
data["ansible_type"] = "start"
data["status"] = "OK"
data["ansible_playbook"] = playbook._file_name
if self.ls_format_version == "v2":
self.logger.info(
"START PLAYBOOK | %s", data['ansible_playbook'], extra=data
)
self.logger.info("START PLAYBOOK | %s", data["ansible_playbook"], extra=data)
else:
self.logger.info("ansible start", extra=data)
@@ -200,15 +191,13 @@ class CallbackModule(CallbackBase):
status = "FAILED"
data = self.base_data.copy()
data['ansible_type'] = "finish"
data['status'] = status
data['ansible_playbook_duration'] = runtime.total_seconds()
data['ansible_result'] = json.dumps(summarize_stat) # deprecated field
data["ansible_type"] = "finish"
data["status"] = status
data["ansible_playbook_duration"] = runtime.total_seconds()
data["ansible_result"] = json.dumps(summarize_stat) # deprecated field
if self.ls_format_version == "v2":
self.logger.info(
"FINISH PLAYBOOK | %s", json.dumps(summarize_stat), extra=data
)
self.logger.info("FINISH PLAYBOOK | %s", json.dumps(summarize_stat), extra=data)
else:
self.logger.info("ansible stats", extra=data)
@@ -219,10 +208,10 @@ class CallbackModule(CallbackBase):
self.play_name = play.name
data = self.base_data.copy()
data['ansible_type'] = "start"
data['status'] = "OK"
data['ansible_play_id'] = self.play_id
data['ansible_play_name'] = self.play_name
data["ansible_type"] = "start"
data["status"] = "OK"
data["ansible_play_id"] = self.play_id
data["ansible_play_name"] = self.play_name
if self.ls_format_version == "v2":
self.logger.info("START PLAY | %s", self.play_name, extra=data)
@@ -232,64 +221,61 @@ class CallbackModule(CallbackBase):
def v2_playbook_on_task_start(self, task, is_conditional):
self.task_id = str(task._uuid)
'''
"""
Tasks and handler tasks are dealt with here
'''
"""
def v2_runner_on_ok(self, result, **kwargs):
task_name = str(result._task).replace('TASK: ', '').replace('HANDLER: ', '')
task_name = str(result._task).replace("TASK: ", "").replace("HANDLER: ", "")
data = self.base_data.copy()
if task_name == 'setup':
data['ansible_type'] = "setup"
data['status'] = "OK"
data['ansible_host'] = result._host.name
data['ansible_play_id'] = self.play_id
data['ansible_play_name'] = self.play_name
data['ansible_task'] = task_name
data['ansible_facts'] = self._dump_results(result._result)
if task_name == "setup":
data["ansible_type"] = "setup"
data["status"] = "OK"
data["ansible_host"] = result._host.name
data["ansible_play_id"] = self.play_id
data["ansible_play_name"] = self.play_name
data["ansible_task"] = task_name
data["ansible_facts"] = self._dump_results(result._result)
if self.ls_format_version == "v2":
self.logger.info(
"SETUP FACTS | %s", self._dump_results(result._result), extra=data
)
self.logger.info("SETUP FACTS | %s", self._dump_results(result._result), extra=data)
else:
self.logger.info("ansible facts", extra=data)
else:
if 'changed' in result._result.keys():
data['ansible_changed'] = result._result['changed']
if "changed" in result._result.keys():
data["ansible_changed"] = result._result["changed"]
else:
data['ansible_changed'] = False
data["ansible_changed"] = False
data['ansible_type'] = "task"
data['status'] = "OK"
data['ansible_host'] = result._host.name
data['ansible_play_id'] = self.play_id
data['ansible_play_name'] = self.play_name
data['ansible_task'] = task_name
data['ansible_task_id'] = self.task_id
data['ansible_result'] = self._dump_results(result._result)
data["ansible_type"] = "task"
data["status"] = "OK"
data["ansible_host"] = result._host.name
data["ansible_play_id"] = self.play_id
data["ansible_play_name"] = self.play_name
data["ansible_task"] = task_name
data["ansible_task_id"] = self.task_id
data["ansible_result"] = self._dump_results(result._result)
if self.ls_format_version == "v2":
self.logger.info(
"TASK OK | %s | RESULT | %s",
task_name, self._dump_results(result._result), extra=data
"TASK OK | %s | RESULT | %s", task_name, self._dump_results(result._result), extra=data
)
else:
self.logger.info("ansible ok", extra=data)
def v2_runner_on_skipped(self, result, **kwargs):
task_name = str(result._task).replace('TASK: ', '').replace('HANDLER: ', '')
task_name = str(result._task).replace("TASK: ", "").replace("HANDLER: ", "")
data = self.base_data.copy()
data['ansible_type'] = "task"
data['status'] = "SKIPPED"
data['ansible_host'] = result._host.name
data['ansible_play_id'] = self.play_id
data['ansible_play_name'] = self.play_name
data['ansible_task'] = task_name
data['ansible_task_id'] = self.task_id
data['ansible_result'] = self._dump_results(result._result)
data["ansible_type"] = "task"
data["status"] = "SKIPPED"
data["ansible_host"] = result._host.name
data["ansible_play_id"] = self.play_id
data["ansible_play_name"] = self.play_name
data["ansible_task"] = task_name
data["ansible_task_id"] = self.task_id
data["ansible_result"] = self._dump_results(result._result)
if self.ls_format_version == "v2":
self.logger.info("TASK SKIPPED | %s", task_name, extra=data)
@@ -298,12 +284,12 @@ class CallbackModule(CallbackBase):
def v2_playbook_on_import_for_host(self, result, imported_file):
data = self.base_data.copy()
data['ansible_type'] = "import"
data['status'] = "IMPORTED"
data['ansible_host'] = result._host.name
data['ansible_play_id'] = self.play_id
data['ansible_play_name'] = self.play_name
data['imported_file'] = imported_file
data["ansible_type"] = "import"
data["status"] = "IMPORTED"
data["ansible_host"] = result._host.name
data["ansible_play_id"] = self.play_id
data["ansible_play_name"] = self.play_name
data["imported_file"] = imported_file
if self.ls_format_version == "v2":
self.logger.info("IMPORT | %s", imported_file, extra=data)
@@ -312,12 +298,12 @@ class CallbackModule(CallbackBase):
def v2_playbook_on_not_import_for_host(self, result, missing_file):
data = self.base_data.copy()
data['ansible_type'] = "import"
data['status'] = "NOT IMPORTED"
data['ansible_host'] = result._host.name
data['ansible_play_id'] = self.play_id
data['ansible_play_name'] = self.play_name
data['imported_file'] = missing_file
data["ansible_type"] = "import"
data["status"] = "NOT IMPORTED"
data["ansible_host"] = result._host.name
data["ansible_play_id"] = self.play_id
data["ansible_play_name"] = self.play_name
data["imported_file"] = missing_file
if self.ls_format_version == "v2":
self.logger.info("NOT IMPORTED | %s", missing_file, extra=data)
@@ -325,75 +311,81 @@ class CallbackModule(CallbackBase):
self.logger.info("ansible import", extra=data)
def v2_runner_on_failed(self, result, **kwargs):
task_name = str(result._task).replace('TASK: ', '').replace('HANDLER: ', '')
task_name = str(result._task).replace("TASK: ", "").replace("HANDLER: ", "")
data = self.base_data.copy()
if 'changed' in result._result.keys():
data['ansible_changed'] = result._result['changed']
if "changed" in result._result.keys():
data["ansible_changed"] = result._result["changed"]
else:
data['ansible_changed'] = False
data["ansible_changed"] = False
data['ansible_type'] = "task"
data['status'] = "FAILED"
data['ansible_host'] = result._host.name
data['ansible_play_id'] = self.play_id
data['ansible_play_name'] = self.play_name
data['ansible_task'] = task_name
data['ansible_task_id'] = self.task_id
data['ansible_result'] = self._dump_results(result._result)
data["ansible_type"] = "task"
data["status"] = "FAILED"
data["ansible_host"] = result._host.name
data["ansible_play_id"] = self.play_id
data["ansible_play_name"] = self.play_name
data["ansible_task"] = task_name
data["ansible_task_id"] = self.task_id
data["ansible_result"] = self._dump_results(result._result)
self.errors += 1
if self.ls_format_version == "v2":
self.logger.error(
"TASK FAILED | %s | HOST | %s | RESULT | %s",
task_name, self.hostname,
self._dump_results(result._result), extra=data
task_name,
self.hostname,
self._dump_results(result._result),
extra=data,
)
else:
self.logger.error("ansible failed", extra=data)
def v2_runner_on_unreachable(self, result, **kwargs):
task_name = str(result._task).replace('TASK: ', '').replace('HANDLER: ', '')
task_name = str(result._task).replace("TASK: ", "").replace("HANDLER: ", "")
data = self.base_data.copy()
data['ansible_type'] = "task"
data['status'] = "UNREACHABLE"
data['ansible_host'] = result._host.name
data['ansible_play_id'] = self.play_id
data['ansible_play_name'] = self.play_name
data['ansible_task'] = task_name
data['ansible_task_id'] = self.task_id
data['ansible_result'] = self._dump_results(result._result)
data["ansible_type"] = "task"
data["status"] = "UNREACHABLE"
data["ansible_host"] = result._host.name
data["ansible_play_id"] = self.play_id
data["ansible_play_name"] = self.play_name
data["ansible_task"] = task_name
data["ansible_task_id"] = self.task_id
data["ansible_result"] = self._dump_results(result._result)
self.errors += 1
if self.ls_format_version == "v2":
self.logger.error(
"UNREACHABLE | %s | HOST | %s | RESULT | %s",
task_name, self.hostname,
self._dump_results(result._result), extra=data
task_name,
self.hostname,
self._dump_results(result._result),
extra=data,
)
else:
self.logger.error("ansible unreachable", extra=data)
def v2_runner_on_async_failed(self, result, **kwargs):
task_name = str(result._task).replace('TASK: ', '').replace('HANDLER: ', '')
task_name = str(result._task).replace("TASK: ", "").replace("HANDLER: ", "")
data = self.base_data.copy()
data['ansible_type'] = "task"
data['status'] = "FAILED"
data['ansible_host'] = result._host.name
data['ansible_play_id'] = self.play_id
data['ansible_play_name'] = self.play_name
data['ansible_task'] = task_name
data['ansible_task_id'] = self.task_id
data['ansible_result'] = self._dump_results(result._result)
data["ansible_type"] = "task"
data["status"] = "FAILED"
data["ansible_host"] = result._host.name
data["ansible_play_id"] = self.play_id
data["ansible_play_name"] = self.play_name
data["ansible_task"] = task_name
data["ansible_task_id"] = self.task_id
data["ansible_result"] = self._dump_results(result._result)
self.errors += 1
if self.ls_format_version == "v2":
self.logger.error(
"ASYNC FAILED | %s | HOST | %s | RESULT | %s",
task_name, self.hostname,
self._dump_results(result._result), extra=data
task_name,
self.hostname,
self._dump_results(result._result),
extra=data,
)
else:
self.logger.error("ansible async", extra=data)


@@ -1,5 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2012, Dag Wieers <dag@wieers.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -81,10 +79,10 @@ options:
version_added: 8.2.0
"""
import email.utils
import json
import os
import re
import email.utils
import smtplib
from ansible.module_utils.common.text.converters import to_bytes
@@ -93,33 +91,33 @@ from ansible.plugins.callback import CallbackBase
class CallbackModule(CallbackBase):
''' This Ansible callback plugin mails errors to interested parties. '''
"""This Ansible callback plugin mails errors to interested parties."""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'notification'
CALLBACK_NAME = 'community.general.mail'
CALLBACK_TYPE = "notification"
CALLBACK_NAME = "community.general.mail"
CALLBACK_NEEDS_WHITELIST = True
def __init__(self, display=None):
super(CallbackModule, self).__init__(display=display)
super().__init__(display=display)
self.sender = None
self.to = 'root'
self.smtphost = os.getenv('SMTPHOST', 'localhost')
self.to = "root"
self.smtphost = os.getenv("SMTPHOST", "localhost")
self.smtpport = 25
self.cc = None
self.bcc = None
def set_options(self, task_keys=None, var_options=None, direct=None):
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
super(CallbackModule, self).set_options(task_keys=task_keys, var_options=var_options, direct=direct)
self.sender = self.get_option("sender")
self.to = self.get_option("to")
self.smtphost = self.get_option("mta")
self.smtpport = self.get_option("mtaport")
self.cc = self.get_option("cc")
self.bcc = self.get_option("bcc")
self.sender = self.get_option('sender')
self.to = self.get_option('to')
self.smtphost = self.get_option('mta')
self.smtpport = self.get_option('mtaport')
self.cc = self.get_option('cc')
self.bcc = self.get_option('bcc')
def mail(self, subject='Ansible error mail', body=None):
def mail(self, subject="Ansible error mail", body=None):
if body is None:
body = subject
@@ -133,14 +131,14 @@ class CallbackModule(CallbackBase):
if self.bcc:
bcc_addresses = email.utils.getaddresses(self.bcc)
content = f'Date: {email.utils.formatdate()}\n'
content += f'From: {email.utils.formataddr(sender_address)}\n'
content = f"Date: {email.utils.formatdate()}\n"
content += f"From: {email.utils.formataddr(sender_address)}\n"
if self.to:
content += f"To: {', '.join([email.utils.formataddr(pair) for pair in to_addresses])}\n"
if self.cc:
content += f"Cc: {', '.join([email.utils.formataddr(pair) for pair in cc_addresses])}\n"
content += f"Message-ID: {email.utils.make_msgid(domain=self.get_option('message_id_domain'))}\n"
content += f'Subject: {subject.strip()}\n\n'
content += f"Subject: {subject.strip()}\n\n"
content += body
addresses = to_addresses
@@ -150,23 +148,23 @@ class CallbackModule(CallbackBase):
addresses += bcc_addresses
if not addresses:
self._display.warning('No receiver has been specified for the mail callback plugin.')
self._display.warning("No receiver has been specified for the mail callback plugin.")
smtp.sendmail(self.sender, [address for name, address in addresses], to_bytes(content))
smtp.quit()
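The mail callback above assembles its message headers by hand with `email.utils` (`formatdate`, `formataddr`, `getaddresses`, `make_msgid`) rather than the `email.message` API. The header-building step in isolation (`build_headers` is this sketch's name; the real plugin also handles Cc/Bcc and a configurable Message-ID domain):

```python
import email.utils


def build_headers(sender: str, to_addrs: list, subject: str) -> str:
    # parseaddr/getaddresses normalize "Name <addr>" strings into
    # (name, addr) pairs; formataddr renders them back in RFC 5322 form.
    sender_pair = email.utils.parseaddr(sender)
    to_pairs = email.utils.getaddresses(to_addrs)
    content = f"Date: {email.utils.formatdate()}\n"
    content += f"From: {email.utils.formataddr(sender_pair)}\n"
    content += f"To: {', '.join(email.utils.formataddr(p) for p in to_pairs)}\n"
    content += f"Message-ID: {email.utils.make_msgid()}\n"
    content += f"Subject: {subject.strip()}\n\n"
    return content
```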
def subject_msg(self, multiline, failtype, linenr):
msg = multiline.strip('\r\n').splitlines()[linenr]
return f'{failtype}: {msg}'
msg = multiline.strip("\r\n").splitlines()[linenr]
return f"{failtype}: {msg}"
def indent(self, multiline, indent=8):
return re.sub('^', ' ' * indent, multiline, flags=re.MULTILINE)
return re.sub("^", " " * indent, multiline, flags=re.MULTILINE)
def body_blob(self, multiline, texttype):
''' Turn some text output in a well-indented block for sending in a mail body '''
intro = f'with the following {texttype}:\n\n'
blob = "\n".join(multiline.strip('\r\n').splitlines())
"""Turn some text output in a well-indented block for sending in a mail body"""
intro = f"with the following {texttype}:\n\n"
blob = "\n".join(multiline.strip("\r\n").splitlines())
return f"{intro}{self.indent(blob)}\n"
def mail_result(self, result, failtype):
@@ -177,83 +175,87 @@ class CallbackModule(CallbackBase):
# Add subject
if self.itembody:
subject = self.itemsubject
elif result._result.get('failed_when_result') is True:
elif result._result.get("failed_when_result") is True:
subject = "Failed due to 'failed_when' condition"
elif result._result.get('msg'):
subject = self.subject_msg(result._result['msg'], failtype, 0)
elif result._result.get('stderr'):
subject = self.subject_msg(result._result['stderr'], failtype, -1)
elif result._result.get('stdout'):
subject = self.subject_msg(result._result['stdout'], failtype, -1)
elif result._result.get('exception'): # Unrelated exceptions are added to output :-/
subject = self.subject_msg(result._result['exception'], failtype, -1)
elif result._result.get("msg"):
subject = self.subject_msg(result._result["msg"], failtype, 0)
elif result._result.get("stderr"):
subject = self.subject_msg(result._result["stderr"], failtype, -1)
elif result._result.get("stdout"):
subject = self.subject_msg(result._result["stdout"], failtype, -1)
elif result._result.get("exception"): # Unrelated exceptions are added to output :-/
subject = self.subject_msg(result._result["exception"], failtype, -1)
else:
subject = f'{failtype}: {result._task.name or result._task.action}'
subject = f"{failtype}: {result._task.name or result._task.action}"
# Make playbook name visible (e.g. in Outlook/Gmail condensed view)
body = f'Playbook: {os.path.basename(self.playbook._file_name)}\n'
body = f"Playbook: {os.path.basename(self.playbook._file_name)}\n"
if result._task.name:
body += f'Task: {result._task.name}\n'
body += f'Module: {result._task.action}\n'
body += f'Host: {host}\n'
body += '\n'
body += f"Task: {result._task.name}\n"
body += f"Module: {result._task.action}\n"
body += f"Host: {host}\n"
body += "\n"
# Add task information (as much as possible)
body += 'The following task failed:\n\n'
if 'invocation' in result._result:
body += self.indent(f"{result._task.action}: {json.dumps(result._result['invocation']['module_args'], indent=4)}\n")
body += "The following task failed:\n\n"
if "invocation" in result._result:
body += self.indent(
f"{result._task.action}: {json.dumps(result._result['invocation']['module_args'], indent=4)}\n"
)
elif result._task.name:
body += self.indent(f"{result._task.name} ({result._task.action})\n")
else:
body += self.indent(f"{result._task.action}\n")
body += "\n"
# Add item / message
if self.itembody:
body += self.itembody
elif result._result.get("failed_when_result") is True:
fail_cond_list = "\n- ".join(result._task.failed_when)
fail_cond = self.indent(f"failed_when:\n- {fail_cond_list}")
body += f"due to the following condition:\n\n{fail_cond}\n\n"
elif result._result.get("msg"):
body += self.body_blob(result._result["msg"], "message")
# Add stdout / stderr / exception / warnings / deprecations
if result._result.get("stdout"):
body += self.body_blob(result._result["stdout"], "standard output")
if result._result.get("stderr"):
body += self.body_blob(result._result["stderr"], "error output")
if result._result.get("exception"): # Unrelated exceptions are added to output :-/
body += self.body_blob(result._result["exception"], "exception")
if result._result.get("warnings"):
for i in range(len(result._result.get("warnings"))):
body += self.body_blob(result._result["warnings"][i], f"exception {i + 1}")
if result._result.get("deprecations"):
for i in range(len(result._result.get("deprecations"))):
body += self.body_blob(result._result["deprecations"][i], f"exception {i + 1}")
body += "and a complete dump of the error:\n\n"
body += self.indent(f"{failtype}: {json.dumps(result._result, cls=AnsibleJSONEncoder, indent=4)}")
self.mail(subject=subject, body=body)
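The "complete dump" step above serializes the whole result dict with Ansible's `AnsibleJSONEncoder`. A minimal stand-alone sketch of that step, using `json.dumps` with `default=str` as a stand-in for that encoder (the sample result dict is hypothetical):

```python
import json

# Hypothetical failed-task result; real results carry many more keys.
result = {
    "msg": "non-zero return code",
    "rc": 2,
    "stderr": "fatal: repository not found",
}

# default=str is a stand-in for AnsibleJSONEncoder: it stringifies any
# value json cannot serialize natively instead of raising TypeError.
dump = json.dumps(result, indent=4, default=str)
```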
def v2_playbook_on_start(self, playbook):
self.playbook = playbook
self.itembody = ""
def v2_runner_on_failed(self, result, ignore_errors=False):
if ignore_errors:
return
self.mail_result(result, "Failed")
def v2_runner_on_unreachable(self, result):
self.mail_result(result, "Unreachable")
def v2_runner_on_async_failed(self, result):
self.mail_result(result, "Async failure")
def v2_runner_item_on_failed(self, result):
# Pass item information to task failure
self.itemsubject = result._result["msg"]
self.itembody += self.body_blob(
json.dumps(result._result, cls=AnsibleJSONEncoder, indent=4), f"failed item dump '{result._result['item']}'"
)

View File

@@ -1,4 +1,3 @@
# Copyright (c) 2018 Remi Verchere <remi@verchere.fr>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -74,13 +73,13 @@ from ansible.plugins.callback import CallbackBase
class CallbackModule(CallbackBase):
"""
send ansible-playbook to Nagios server using nrdp protocol
"""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "notification"
CALLBACK_NAME = "community.general.nrdp"
CALLBACK_NEEDS_WHITELIST = True
# Nagios states
@@ -90,34 +89,35 @@ class CallbackModule(CallbackBase):
UNKNOWN = 3
def __init__(self):
super().__init__()
self.printed_playbook = False
self.playbook_name = None
self.play = None
def set_options(self, task_keys=None, var_options=None, direct=None):
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
self.url = self.get_option("url")
if not self.url.endswith("/"):
self.url += "/"
self.token = self.get_option("token")
self.hostname = self.get_option("hostname")
self.servicename = self.get_option("servicename")
self.validate_nrdp_certs = self.get_option("validate_certs")
if (self.url or self.token or self.hostname or self.servicename) is None:
self._display.warning(
"NRDP callback wants the NRDP_URL,"
" NRDP_TOKEN, NRDP_HOSTNAME,"
" NRDP_SERVICENAME"
" environment variables'."
" The NRDP callback plugin is disabled."
)
self.disabled = True
def _send_nrdp(self, state, msg):
"""
nrpd service check send XMLDATA like this:
<?xml version='1.0'?>
<checkresults>
@@ -128,7 +128,7 @@ class CallbackModule(CallbackBase):
<output>WARNING: Danger Will Robinson!|perfdata</output>
</checkresult>
</checkresults>
"""
xmldata = "<?xml version='1.0'?>\n"
xmldata += "<checkresults>\n"
xmldata += "<checkresult type='service'>\n"
@@ -139,31 +139,24 @@ class CallbackModule(CallbackBase):
xmldata += "</checkresult>\n"
xmldata += "</checkresults>\n"
body = {"cmd": "submitcheck", "token": self.token, "XMLDATA": to_bytes(xmldata)}
try:
response = open_url(self.url, data=urlencode(body), method="POST", validate_certs=self.validate_nrdp_certs)
return response.read()
except Exception as ex:
self._display.warning(f"NRDP callback cannot send result {ex}")
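As a hedged, self-contained sketch of what `_send_nrdp` assembles (the token, host name, and service name below are hypothetical; the real plugin submits the encoded body via `open_url`):

```python
from urllib.parse import urlencode

def build_nrdp_body(token, hostname, servicename, state, output):
    # NRDP expects an XMLDATA payload shaped like the docstring above.
    xmldata = "<?xml version='1.0'?>\n"
    xmldata += "<checkresults>\n"
    xmldata += "<checkresult type='service'>\n"
    xmldata += f"<hostname>{hostname}</hostname>\n"
    xmldata += f"<servicename>{servicename}</servicename>\n"
    xmldata += f"<state>{state}</state>\n"
    xmldata += f"<output>{output}</output>\n"
    xmldata += "</checkresult>\n"
    xmldata += "</checkresults>\n"
    # The POST body mirrors the plugin's dict: cmd, token, XMLDATA.
    return urlencode({"cmd": "submitcheck", "token": token, "XMLDATA": xmldata})

encoded = build_nrdp_body("secret", "web01", "ansible", 0, "OK: play finished")
```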
def v2_playbook_on_play_start(self, play):
"""
Display Playbook and play start messages
"""
self.play = play
def v2_playbook_on_stats(self, stats):
"""
Display info about playbook statistics
"""
name = self.play
gstats = ""
hosts = sorted(stats.processed.keys())
@@ -171,13 +164,14 @@ class CallbackModule(CallbackBase):
for host in hosts:
stat = stats.summarize(host)
gstats += (
f"'{host}_ok'={stat['ok']} '{host}_changed'={stat['changed']}"
f" '{host}_unreachable'={stat['unreachable']} '{host}_failed'={stat['failures']} "
)
# Critical when failed tasks or unreachable host
critical += stat["failures"]
critical += stat["unreachable"]
# Warning when changed tasks
warning += stat["changed"]
msg = f"{name} | {gstats}"
if critical:

View File

@@ -1,4 +1,3 @@
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -21,11 +20,10 @@ from ansible.plugins.callback import CallbackBase
class CallbackModule(CallbackBase):
"""
This callback won't print messages to stdout when new callback events are received.
"""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "stdout"
CALLBACK_NAME = "community.general.null"

View File

@@ -1,4 +1,3 @@
# Copyright (c) 2021, Victor Martinez <VictorMartinezRubio@gmail.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -146,22 +145,18 @@ from ansible.errors import AnsibleError
from ansible.module_utils.ansible_release import __version__ as ansible_version
from ansible.plugins.callback import CallbackBase
OTEL_LIBRARY_IMPORT_ERROR: ImportError | None
try:
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter as GRPCOTLPSpanExporter
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter as HTTPOTLPSpanExporter
from opentelemetry.sdk.resources import SERVICE_NAME, Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, SimpleSpanProcessor
from opentelemetry.sdk.trace.export.in_memory_span_exporter import InMemorySpanExporter
from opentelemetry.trace import SpanKind
from opentelemetry.trace.propagation.tracecontext import TraceContextTextMapPropagator
from opentelemetry.trace.status import Status, StatusCode
except ImportError as imp_exc:
OTEL_LIBRARY_IMPORT_ERROR = imp_exc
else:
@@ -186,9 +181,9 @@ class TaskData:
def add_host(self, host):
if host.uuid in self.host_data:
if host.status == "included":
# concatenate task include output from multiple items
host.result = f"{self.host_data[host.uuid].result}\n{host.result}"
else:
return
@@ -208,14 +203,14 @@ class HostData:
self.finish = time_ns()
class OpenTelemetrySource:
def __init__(self, display):
self.ansible_playbook = ""
self.session = str(uuid.uuid4())
self.host = socket.gethostname()
try:
self.ip_address = socket.gethostbyname(socket.gethostname())
except Exception:
self.ip_address = None
self.user = getpass.getuser()
@@ -223,11 +218,11 @@ class OpenTelemetrySource(object):
def traceparent_context(self, traceparent):
carrier = dict()
carrier["traceparent"] = traceparent
return TraceContextTextMapPropagator().extract(carrier=carrier)
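The carrier value handed to `TraceContextTextMapPropagator` is a W3C Trace Context `traceparent` header. A dependency-free sketch of its `version-trace_id-parent_id-trace_flags` layout (the real extraction is performed by the OpenTelemetry SDK call above):

```python
def parse_traceparent(header):
    # "00-<32 hex trace id>-<16 hex parent span id>-<2 hex flags>"
    version, trace_id, parent_id, flags = header.split("-")
    return {
        "version": version,
        "trace_id": trace_id,
        "parent_id": parent_id,
        # bit 0 of trace-flags marks the trace as sampled
        "sampled": bool(int(flags, 16) & 0x01),
    }

ctx = parse_traceparent("00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01")
```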
def start_task(self, tasks_data, hide_task_arguments, play_name, task):
"""record the start of a task for one or more hosts"""
uuid = task._uuid
@@ -245,53 +240,51 @@ class OpenTelemetrySource(object):
tasks_data[uuid] = TaskData(uuid, name, path, play_name, action, args)
def finish_task(self, tasks_data, status, result, dump):
"""record the results of a task for a single host"""
task_uuid = result._task._uuid
if hasattr(result, "_host") and result._host is not None:
host_uuid = result._host._uuid
host_name = result._host.name
else:
host_uuid = "include"
host_name = "include"
task = tasks_data[task_uuid]
task.dump = dump
task.add_host(HostData(host_uuid, host_name, status, result))
def generate_distributed_traces(
self,
otel_service_name,
ansible_playbook,
tasks_data,
status,
traceparent,
disable_logs,
disable_attributes_in_logs,
otel_exporter_otlp_traces_protocol,
store_spans_in_file,
):
"""generate distributed traces from the collected TaskData and HostData"""
tasks = []
parent_start_time = None
for task in tasks_data.values():
if parent_start_time is None:
parent_start_time = task.start
tasks.append(task)
trace.set_tracer_provider(TracerProvider(resource=Resource.create({SERVICE_NAME: otel_service_name})))
otel_exporter = None
if store_spans_in_file:
otel_exporter = InMemorySpanExporter()
processor = SimpleSpanProcessor(otel_exporter)
else:
if otel_exporter_otlp_traces_protocol == "grpc":
otel_exporter = GRPCOTLPSpanExporter()
else:
otel_exporter = HTTPOTLPSpanExporter()
@@ -301,8 +294,12 @@ class OpenTelemetrySource(object):
tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span(
ansible_playbook,
context=self.traceparent_context(traceparent),
start_time=parent_start_time,
kind=SpanKind.SERVER,
) as parent:
parent.set_status(status)
# Populate trace metadata attributes
parent.set_attribute("ansible.version", ansible_version)
@@ -312,43 +309,45 @@ class OpenTelemetrySource(object):
parent.set_attribute("ansible.host.ip", self.ip_address)
parent.set_attribute("ansible.host.user", self.user)
for task in tasks:
for host_data in task.host_data.values():
with tracer.start_as_current_span(task.name, start_time=task.start, end_on_exit=False) as span:
self.update_span_data(task, host_data, span, disable_logs, disable_attributes_in_logs)
return otel_exporter
def update_span_data(self, task_data, host_data, span, disable_logs, disable_attributes_in_logs):
"""update the span with the given TaskData and HostData"""
name = f"[{host_data.name}] {task_data.play}: {task_data.name}"
message = "success"
res = {}
rc = 0
status = Status(status_code=StatusCode.OK)
if host_data.status != "included":
# Support loops
enriched_error_message = None
if "results" in host_data.result._result:
if host_data.status == "failed":
message = self.get_error_message_from_results(host_data.result._result["results"], task_data.action)
enriched_error_message = self.enrich_error_message_from_results(
host_data.result._result["results"], task_data.action
)
else:
res = host_data.result._result
rc = res.get("rc", 0)
if host_data.status == "failed":
message = self.get_error_message(res)
enriched_error_message = self.enrich_error_message(res)
if host_data.status == "failed":
status = Status(status_code=StatusCode.ERROR, description=message)
# Record an exception with the task message
span.record_exception(BaseException(enriched_error_message))
elif host_data.status == "skipped":
message = res["skip_reason"] if "skip_reason" in res else "skipped"
status = Status(status_code=StatusCode.UNSET)
elif host_data.status == "ignored":
status = Status(status_code=StatusCode.UNSET)
span.set_status(status)
@@ -360,7 +359,7 @@ class OpenTelemetrySource(object):
"ansible.task.name": name,
"ansible.task.result": rc,
"ansible.task.host.name": host_data.name,
"ansible.task.host.status": host_data.status,
}
if isinstance(task_data.args, dict) and "gather_facts" not in task_data.action:
names = tuple(self.transform_ansible_unicode_to_str(k) for k in task_data.args.keys())
@@ -380,10 +379,10 @@ class OpenTelemetrySource(object):
span.end(end_time=host_data.finish)
def set_span_attributes(self, span, attributes):
"""update the span attributes with the given attributes if not None"""
if span is None and self._display is not None:
self._display.warning("span object is None. Please double check if that is expected.")
else:
if attributes is not None:
span.set_attributes(attributes)
@@ -411,7 +410,18 @@ class OpenTelemetrySource(object):
@staticmethod
def url_from_args(args):
# the order matters
url_args = (
"url",
"api_url",
"baseurl",
"repo",
"server_url",
"chart_repo_url",
"registry_url",
"endpoint",
"uri",
"updates_url",
)
for arg in url_args:
if args is not None and args.get(arg):
return args.get(arg)
@@ -436,33 +446,33 @@ class OpenTelemetrySource(object):
@staticmethod
def get_error_message(result):
if result.get("exception") is not None:
return OpenTelemetrySource._last_line(result["exception"])
return result.get("msg", "failed")
@staticmethod
def get_error_message_from_results(results, action):
for result in results:
if result.get("failed", False):
return f"{action}({result.get('item', 'none')}) - {OpenTelemetrySource.get_error_message(result)}"
@staticmethod
def _last_line(text):
lines = text.strip().split("\n")
return lines[-1]
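`get_error_message` falls back to `_last_line` because Python puts the exception type and message on the final line of a traceback. A quick illustration of the same logic (the traceback string is a made-up sample):

```python
# A typical traceback string as stored in result["exception"].
traceback_text = (
    "Traceback (most recent call last):\n"
    '  File "module.py", line 3, in run\n'
    "ValueError: bad input\n"
)

# Same logic as _last_line: strip, split on newlines, keep the last line.
last = traceback_text.strip().split("\n")[-1]
```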
@staticmethod
def enrich_error_message(result):
message = result.get("msg", "failed")
exception = result.get("exception")
stderr = result.get("stderr")
return f'message: "{message}"\nexception: "{exception}"\nstderr: "{stderr}"'
@staticmethod
def enrich_error_message_from_results(results, action):
message = ""
for result in results:
if result.get("failed", False):
message = f"{action}({result.get('item', 'none')}) - {OpenTelemetrySource.enrich_error_message(result)}\n{message}"
return message
@@ -473,12 +483,12 @@ class CallbackModule(CallbackBase):
"""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "notification"
CALLBACK_NAME = "community.general.opentelemetry"
CALLBACK_NEEDS_ENABLED = True
def __init__(self, display=None):
super().__init__(display=display)
self.hide_task_arguments = None
self.disable_attributes_in_logs = None
self.disable_logs = None
@@ -494,7 +504,7 @@ class CallbackModule(CallbackBase):
if OTEL_LIBRARY_IMPORT_ERROR:
raise AnsibleError(
"The `opentelemetry-api`, `opentelemetry-exporter-otlp` or `opentelemetry-sdk` must be installed to use this plugin"
) from OTEL_LIBRARY_IMPORT_ERROR
self.tasks_data = OrderedDict()
@@ -502,37 +512,35 @@ class CallbackModule(CallbackBase):
self.opentelemetry = OpenTelemetrySource(display=self._display)
def set_options(self, task_keys=None, var_options=None, direct=None):
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
environment_variable = self.get_option("enable_from_environment")
if environment_variable is not None and os.environ.get(environment_variable, "false").lower() != "true":
self.disabled = True
self._display.warning(
f"The `enable_from_environment` option has been set and {environment_variable} is not enabled. Disabling the `opentelemetry` callback plugin."
)
self.hide_task_arguments = self.get_option("hide_task_arguments")
self.disable_attributes_in_logs = self.get_option("disable_attributes_in_logs")
self.disable_logs = self.get_option("disable_logs")
self.store_spans_in_file = self.get_option("store_spans_in_file")
self.otel_service_name = self.get_option("otel_service_name")
if not self.otel_service_name:
self.otel_service_name = "ansible"
# See https://github.com/open-telemetry/opentelemetry-specification/issues/740
self.traceparent = self.get_option("traceparent")
self.otel_exporter_otlp_traces_protocol = self.get_option("otel_exporter_otlp_traces_protocol")
def dump_results(self, task, result):
"""dump the results if disable_logs is not enabled"""
if self.disable_logs:
return ""
# ansible.builtin.uri contains the response in the json field
@@ -552,74 +560,40 @@ class CallbackModule(CallbackBase):
self.play_name = play.get_name()
def v2_runner_on_no_hosts(self, task):
self.opentelemetry.start_task(self.tasks_data, self.hide_task_arguments, self.play_name, task)
def v2_playbook_on_task_start(self, task, is_conditional):
self.opentelemetry.start_task(self.tasks_data, self.hide_task_arguments, self.play_name, task)
def v2_playbook_on_cleanup_task_start(self, task):
self.opentelemetry.start_task(self.tasks_data, self.hide_task_arguments, self.play_name, task)
def v2_playbook_on_handler_task_start(self, task):
self.opentelemetry.start_task(self.tasks_data, self.hide_task_arguments, self.play_name, task)
def v2_runner_on_failed(self, result, ignore_errors=False):
if ignore_errors:
status = "ignored"
else:
status = "failed"
self.errors += 1
self.opentelemetry.finish_task(
self.tasks_data, status, result, self.dump_results(self.tasks_data[result._task._uuid], result)
)
def v2_runner_on_ok(self, result):
self.opentelemetry.finish_task(
self.tasks_data, "ok", result, self.dump_results(self.tasks_data[result._task._uuid], result)
)
def v2_runner_on_skipped(self, result):
self.opentelemetry.finish_task(
self.tasks_data, "skipped", result, self.dump_results(self.tasks_data[result._task._uuid], result)
)
def v2_playbook_on_include(self, included_file):
self.opentelemetry.finish_task(self.tasks_data, "included", included_file, "")
def v2_playbook_on_stats(self, stats):
if self.errors == 0:
@@ -635,7 +609,7 @@ class CallbackModule(CallbackBase):
self.disable_logs,
self.disable_attributes_in_logs,
self.otel_exporter_otlp_traces_protocol,
self.store_spans_in_file,
)
if self.store_spans_in_file:

View File

@@ -1,10 +1,8 @@
# Copyright (c) 2025, Max Mitschke <maxmitschke@fastmail.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import annotations
DOCUMENTATION = r"""
name: print_task
@@ -24,13 +22,13 @@ ansible.cfg: |-
callbacks_enabled=community.general.print_task
"""
from yaml import dump, load
try:
from yaml import CSafeDumper as SafeDumper
from yaml import CSafeLoader as SafeLoader
except ImportError:
from yaml import SafeDumper, SafeLoader # type: ignore
from ansible.plugins.callback import CallbackBase
@@ -39,18 +37,19 @@ class CallbackModule(CallbackBase):
"""
This callback module tells you how long your plays ran for.
"""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "aggregate"
CALLBACK_NAME = "community.general.print_task"
CALLBACK_NEEDS_ENABLED = True
def __init__(self):
super().__init__()
self._printed_message = False
def _print_task(self, task):
if hasattr(task, "_ds"):
task_snippet = load(str([task._ds.copy()]), Loader=SafeLoader)
task_yaml = dump(task_snippet, sort_keys=False, Dumper=SafeDumper)
self._display.display(f"\n{task_yaml}\n")

View File

@@ -1,4 +1,3 @@
# Copyright (c) 2012, Michael DeHaan, <michael.dehaan@gmail.com>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -19,9 +18,9 @@ description:
- This plugin uses C(say) or C(espeak) to "speak" about play events.
"""
import os
import platform
import subprocess
from ansible.module_utils.common.process import get_bin_path
from ansible.plugins.callback import CallbackBase
@@ -31,14 +30,14 @@ class CallbackModule(CallbackBase):
"""
makes Ansible much more exciting.
"""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "notification"
CALLBACK_NAME = "community.general.say"
CALLBACK_NEEDS_WHITELIST = True
def __init__(self):
super().__init__()
self.FAILED_VOICE = None
self.REGULAR_VOICE = None
@@ -46,21 +45,23 @@ class CallbackModule(CallbackBase):
self.LASER_VOICE = None
try:
self.synthesizer = get_bin_path("say")
if platform.system() != "Darwin":
# 'say' binary available, it might be GNUstep tool which doesn't support 'voice' parameter
self._display.warning(
f"'say' executable found but system is '{platform.system()}': ignoring voice parameter"
)
else:
self.FAILED_VOICE = "Zarvox"
self.REGULAR_VOICE = "Trinoids"
self.HAPPY_VOICE = "Cellos"
self.LASER_VOICE = "Princess"
except ValueError:
try:
self.synthesizer = get_bin_path("espeak")
self.FAILED_VOICE = "klatt"
self.HAPPY_VOICE = "f5"
self.LASER_VOICE = "whisper"
except ValueError:
self.synthesizer = None
@@ -68,12 +69,14 @@ class CallbackModule(CallbackBase):
# ansible will not call any callback if disabled is set to True
if not self.synthesizer:
self.disabled = True
self._display.warning(
f"Unable to find either 'say' or 'espeak' executable, plugin {os.path.basename(__file__)} disabled"
)
def say(self, msg, voice):
cmd = [self.synthesizer, msg]
if voice:
cmd.extend(("-v", voice))
subprocess.call(cmd)
def runner_on_failed(self, host, res, ignore_errors=False):

View File

@@ -1,4 +1,3 @@
# Copyright (c) Fastly, inc 2016
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -40,20 +39,19 @@ EXAMPLES = r"""
import difflib
from ansible import constants as C
from ansible.module_utils.common.text.converters import to_text
from ansible.plugins.callback import CallbackBase
DONT_COLORIZE = False
COLORS = {
"normal": "\033[0m",
"ok": f"\x1b[{C.COLOR_CODES[C.COLOR_OK]}m", # type: ignore
"bold": "\033[1m",
"not_so_bold": "\033[1m\033[34m",
"changed": f"\x1b[{C.COLOR_CODES[C.COLOR_CHANGED]}m", # type: ignore
"failed": f"\x1b[{C.COLOR_CODES[C.COLOR_ERROR]}m", # type: ignore
"endc": "\033[0m",
"skipped": f"\x1b[{C.COLOR_CODES[C.COLOR_SKIP]}m", # type: ignore
}
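The plugin's `colorize` helper (elided by this diff) applies the table roughly as follows. This is an assumed sketch, not copied from the plugin: `dont_colorize` stands in for the module-level `DONT_COLORIZE` flag, and the `changed` escape code here is hypothetical (the real table derives codes from Ansible's color constants):

```python
COLORS = {
    "bold": "\033[1m",
    "changed": "\033[33m",  # hypothetical code; the plugin derives these from ansible constants
    "endc": "\033[0m",
}

def colorize(msg, color, dont_colorize=False):
    # Wrap the text in the chosen escape sequence, resetting afterwards.
    if dont_colorize or color not in COLORS:
        return msg
    return f"{COLORS[color]}{msg}{COLORS['endc']}"

styled = colorize("# task name", "bold")
```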
@@ -79,22 +77,21 @@ class CallbackModule(CallbackBase):
"""selective.py callback plugin."""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "stdout"
CALLBACK_NAME = "community.general.selective"
def __init__(self, display=None):
"""selective.py callback plugin."""
super().__init__(display)
self.last_skipped = False
self.last_task_name = None
self.printed_last_task = False
def set_options(self, task_keys=None, var_options=None, direct=None):
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
global DONT_COLORIZE
DONT_COLORIZE = self.get_option("nocolor")
def _print_task(self, task_name=None):
if task_name is None:
@@ -106,7 +103,7 @@ class CallbackModule(CallbackBase):
if self.last_skipped:
print()
line = f"# {task_name} "
msg = colorize(f"{line}{'*' * (line_length - len(line))}", "bold")
print(msg)
def _indent_text(self, text, indent_level):
@@ -114,48 +111,51 @@ class CallbackModule(CallbackBase):
result_lines = []
for l in lines:
result_lines.append(f"{' ' * indent_level}{l}")
return "\n".join(result_lines)
def _print_diff(self, diff, indent_level):
if isinstance(diff, dict):
try:
diff = "\n".join(
difflib.unified_diff(
diff["before"].splitlines(),
diff["after"].splitlines(),
fromfile=diff.get("before_header", "new_file"),
tofile=diff["after_header"],
)
)
except AttributeError:
diff = dict_diff(diff['before'], diff['after'])
diff = dict_diff(diff["before"], diff["after"])
if diff:
diff = colorize(str(diff), 'changed')
diff = colorize(str(diff), "changed")
print(self._indent_text(diff, indent_level + 4))
def _print_host_or_item(self, host_or_item, changed, msg, diff, is_host, error, stdout, stderr):
if is_host:
indent_level = 0
name = colorize(host_or_item.name, 'not_so_bold')
name = colorize(host_or_item.name, "not_so_bold")
else:
indent_level = 4
if isinstance(host_or_item, dict):
if 'key' in host_or_item.keys():
host_or_item = host_or_item['key']
name = colorize(to_text(host_or_item), 'bold')
if "key" in host_or_item.keys():
host_or_item = host_or_item["key"]
name = colorize(to_text(host_or_item), "bold")
if error:
color = 'failed'
change_string = colorize('FAILED!!!', color)
color = "failed"
change_string = colorize("FAILED!!!", color)
else:
color = 'changed' if changed else 'ok'
color = "changed" if changed else "ok"
change_string = colorize(f"changed={changed}", color)
msg = colorize(msg, color)
line_length = 120
spaces = ' ' * (40 - len(name) - indent_level)
spaces = " " * (40 - len(name) - indent_level)
line = f"{' ' * indent_level} * {name}{spaces}- {change_string}"
if len(msg) < 50:
line += f' -- {msg}'
line += f" -- {msg}"
print(f"{line} {'-' * (line_length - len(line))}---------")
else:
print(f"{line} {'-' * (line_length - len(line))}")
@@ -164,10 +164,10 @@ class CallbackModule(CallbackBase):
if diff:
self._print_diff(diff, indent_level)
if stdout:
stdout = colorize(stdout, 'failed')
stdout = colorize(stdout, "failed")
print(self._indent_text(stdout, indent_level + 4))
if stderr:
stderr = colorize(stderr, 'failed')
stderr = colorize(stderr, "failed")
print(self._indent_text(stderr, indent_level + 4))
def v2_playbook_on_play_start(self, play):
@@ -182,61 +182,61 @@ class CallbackModule(CallbackBase):
def _print_task_result(self, result, error=False, **kwargs):
"""Run when a task finishes correctly."""
if 'print_action' in result._task.tags or error or self._display.verbosity > 1:
if "print_action" in result._task.tags or error or self._display.verbosity > 1:
self._print_task()
self.last_skipped = False
msg = to_text(result._result.get('msg', '')) or\
to_text(result._result.get('reason', ''))
msg = to_text(result._result.get("msg", "")) or to_text(result._result.get("reason", ""))
stderr = [result._result.get('exception', None),
result._result.get('module_stderr', None)]
stderr = [result._result.get("exception", None), result._result.get("module_stderr", None)]
stderr = "\n".join([e for e in stderr if e]).strip()
self._print_host_or_item(result._host,
result._result.get('changed', False),
msg,
result._result.get('diff', None),
is_host=True,
error=error,
stdout=result._result.get('module_stdout', None),
stderr=stderr.strip(),
)
if 'results' in result._result:
for r in result._result['results']:
failed = 'failed' in r and r['failed']
self._print_host_or_item(
result._host,
result._result.get("changed", False),
msg,
result._result.get("diff", None),
is_host=True,
error=error,
stdout=result._result.get("module_stdout", None),
stderr=stderr.strip(),
)
if "results" in result._result:
for r in result._result["results"]:
failed = "failed" in r and r["failed"]
stderr = [r.get('exception', None), r.get('module_stderr', None)]
stderr = [r.get("exception", None), r.get("module_stderr", None)]
stderr = "\n".join([e for e in stderr if e]).strip()
self._print_host_or_item(r[r['ansible_loop_var']],
r.get('changed', False),
to_text(r.get('msg', '')),
r.get('diff', None),
is_host=False,
error=failed,
stdout=r.get('module_stdout', None),
stderr=stderr.strip(),
)
self._print_host_or_item(
r[r["ansible_loop_var"]],
r.get("changed", False),
to_text(r.get("msg", "")),
r.get("diff", None),
is_host=False,
error=failed,
stdout=r.get("module_stdout", None),
stderr=stderr.strip(),
)
else:
self.last_skipped = True
print('.', end="")
print(".", end="")
def v2_playbook_on_stats(self, stats):
"""Display info about playbook statistics."""
print()
self.printed_last_task = False
self._print_task('STATS')
self._print_task("STATS")
hosts = sorted(stats.processed.keys())
for host in hosts:
s = stats.summarize(host)
if s['failures'] or s['unreachable']:
color = 'failed'
elif s['changed']:
color = 'changed'
if s["failures"] or s["unreachable"]:
color = "failed"
elif s["changed"]:
color = "changed"
else:
color = 'ok'
color = "ok"
msg = (
f"{host} : ok={s['ok']}\tchanged={s['changed']}\tfailed={s['failures']}\tunreachable="
@@ -251,14 +251,13 @@ class CallbackModule(CallbackBase):
self.last_skipped = False
line_length = 120
spaces = ' ' * (31 - len(result._host.name) - 4)
spaces = " " * (31 - len(result._host.name) - 4)
line = f" * {colorize(result._host.name, 'not_so_bold')}{spaces}- {colorize('skipped', 'skipped')}"
reason = result._result.get('skipped_reason', '') or \
result._result.get('skip_reason', '')
reason = result._result.get("skipped_reason", "") or result._result.get("skip_reason", "")
if len(reason) < 50:
line += f' -- {reason}'
line += f" -- {reason}"
print(f"{line} {'-' * (line_length - len(line))}---------")
else:
print(f"{line} {'-' * (line_length - len(line))}")
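The selective callback above builds its palette from f-strings over `C.COLOR_CODES` and wraps every message in `colorize(...)`; a minimal standalone sketch of that ANSI pattern, with an illustrative color table (not the one `ansible.constants` actually provides):

```python
# Minimal sketch of the ANSI color pattern used by the selective callback:
# wrap text in an escape sequence for the color, then reset attributes.
# This COLOR_CODES table is an assumption for illustration only.
COLOR_CODES = {
    "ok": "0;32",       # green
    "changed": "0;33",  # yellow
    "failed": "0;31",   # red
    "bold": "1",        # bold, no color change
}
RESET = "\x1b[0m"


def colorize(msg, color):
    """Wrap msg in the ANSI escape for `color`, then reset."""
    return f"\x1b[{COLOR_CODES[color]}m{msg}{RESET}"


print(colorize("changed=True", "changed"))
```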


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2014-2015, Matt Martz <matt@sivel.net>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -71,6 +70,7 @@ from ansible.plugins.callback import CallbackBase
try:
import prettytable
HAS_PRETTYTABLE = True
except ImportError:
HAS_PRETTYTABLE = False
@@ -80,20 +80,20 @@ class CallbackModule(CallbackBase):
"""This is an ansible callback plugin that sends status
updates to a Slack channel during playbook execution.
"""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'notification'
CALLBACK_NAME = 'community.general.slack'
CALLBACK_TYPE = "notification"
CALLBACK_NAME = "community.general.slack"
CALLBACK_NEEDS_WHITELIST = True
def __init__(self, display=None):
super(CallbackModule, self).__init__(display=display)
super().__init__(display=display)
if not HAS_PRETTYTABLE:
self.disabled = True
self._display.warning('The `prettytable` python module is not '
'installed. Disabling the Slack callback '
'plugin.')
self._display.warning(
"The `prettytable` python module is not installed. Disabling the Slack callback plugin."
)
self.playbook_name = None
@@ -103,34 +103,34 @@ class CallbackModule(CallbackBase):
self.guid = uuid.uuid4().hex[:6]
def set_options(self, task_keys=None, var_options=None, direct=None):
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
super(CallbackModule, self).set_options(task_keys=task_keys, var_options=var_options, direct=direct)
self.webhook_url = self.get_option('webhook_url')
self.channel = self.get_option('channel')
self.username = self.get_option('username')
self.show_invocation = (self._display.verbosity > 1)
self.validate_certs = self.get_option('validate_certs')
self.http_agent = self.get_option('http_agent')
self.webhook_url = self.get_option("webhook_url")
self.channel = self.get_option("channel")
self.username = self.get_option("username")
self.show_invocation = self._display.verbosity > 1
self.validate_certs = self.get_option("validate_certs")
self.http_agent = self.get_option("http_agent")
if self.webhook_url is None:
self.disabled = True
self._display.warning('Slack Webhook URL was not provided. The '
'Slack Webhook URL can be provided using '
'the `SLACK_WEBHOOK_URL` environment '
'variable.')
self._display.warning(
"Slack Webhook URL was not provided. The "
"Slack Webhook URL can be provided using "
"the `SLACK_WEBHOOK_URL` environment "
"variable."
)
def send_msg(self, attachments):
headers = {
'Content-type': 'application/json',
"Content-type": "application/json",
}
payload = {
'channel': self.channel,
'username': self.username,
'attachments': attachments,
'parse': 'none',
'icon_url': ('https://cdn2.hubspot.net/hub/330046/'
'file-449187601-png/ansible_badge.png'),
"channel": self.channel,
"username": self.username,
"attachments": attachments,
"parse": "none",
"icon_url": ("https://cdn2.hubspot.net/hub/330046/file-449187601-png/ansible_badge.png"),
}
data = json.dumps(payload)
@@ -146,67 +146,63 @@ class CallbackModule(CallbackBase):
)
return response.read()
except Exception as e:
self._display.warning(f'Could not submit message to Slack: {e}')
self._display.warning(f"Could not submit message to Slack: {e}")
def v2_playbook_on_start(self, playbook):
self.playbook_name = os.path.basename(playbook._file_name)
title = [
f'*Playbook initiated* (_{self.guid}_)'
]
title = [f"*Playbook initiated* (_{self.guid}_)"]
invocation_items = []
if context.CLIARGS and self.show_invocation:
tags = context.CLIARGS['tags']
skip_tags = context.CLIARGS['skip_tags']
extra_vars = context.CLIARGS['extra_vars']
subset = context.CLIARGS['subset']
inventory = [os.path.abspath(i) for i in context.CLIARGS['inventory']]
tags = context.CLIARGS["tags"]
skip_tags = context.CLIARGS["skip_tags"]
extra_vars = context.CLIARGS["extra_vars"]
subset = context.CLIARGS["subset"]
inventory = [os.path.abspath(i) for i in context.CLIARGS["inventory"]]
invocation_items.append(f"Inventory: {', '.join(inventory)}")
if tags and tags != ['all']:
if tags and tags != ["all"]:
invocation_items.append(f"Tags: {', '.join(tags)}")
if skip_tags:
invocation_items.append(f"Skip Tags: {', '.join(skip_tags)}")
if subset:
invocation_items.append(f'Limit: {subset}')
invocation_items.append(f"Limit: {subset}")
if extra_vars:
invocation_items.append(f"Extra Vars: {' '.join(extra_vars)}")
title.append(f"by *{context.CLIARGS['remote_user']}*")
title.append(f'\n\n*{self.playbook_name}*')
msg_items = [' '.join(title)]
title.append(f"\n\n*{self.playbook_name}*")
msg_items = [" ".join(title)]
if invocation_items:
_inv_item = '\n'.join(invocation_items)
msg_items.append(f'```\n{_inv_item}\n```')
_inv_item = "\n".join(invocation_items)
msg_items.append(f"```\n{_inv_item}\n```")
msg = '\n'.join(msg_items)
msg = "\n".join(msg_items)
attachments = [{
'fallback': msg,
'fields': [
{
'value': msg
}
],
'color': 'warning',
'mrkdwn_in': ['text', 'fallback', 'fields'],
}]
attachments = [
{
"fallback": msg,
"fields": [{"value": msg}],
"color": "warning",
"mrkdwn_in": ["text", "fallback", "fields"],
}
]
self.send_msg(attachments=attachments)
def v2_playbook_on_play_start(self, play):
"""Display Play start messages"""
name = play.name or f'Play name not specified ({play._uuid})'
msg = f'*Starting play* (_{self.guid}_)\n\n*{name}*'
name = play.name or f"Play name not specified ({play._uuid})"
msg = f"*Starting play* (_{self.guid}_)\n\n*{name}*"
attachments = [
{
'fallback': msg,
'text': msg,
'color': 'warning',
'mrkdwn_in': ['text', 'fallback', 'fields'],
"fallback": msg,
"text": msg,
"color": "warning",
"mrkdwn_in": ["text", "fallback", "fields"],
}
]
self.send_msg(attachments=attachments)
@@ -216,8 +212,7 @@ class CallbackModule(CallbackBase):
hosts = sorted(stats.processed.keys())
t = prettytable.PrettyTable(['Host', 'Ok', 'Changed', 'Unreachable',
'Failures', 'Rescued', 'Ignored'])
t = prettytable.PrettyTable(["Host", "Ok", "Changed", "Unreachable", "Failures", "Rescued", "Ignored"])
failures = False
unreachable = False
@@ -225,38 +220,28 @@ class CallbackModule(CallbackBase):
for h in hosts:
s = stats.summarize(h)
if s['failures'] > 0:
if s["failures"] > 0:
failures = True
if s['unreachable'] > 0:
if s["unreachable"] > 0:
unreachable = True
t.add_row([h] + [s[k] for k in ['ok', 'changed', 'unreachable',
'failures', 'rescued', 'ignored']])
t.add_row([h] + [s[k] for k in ["ok", "changed", "unreachable", "failures", "rescued", "ignored"]])
attachments = []
msg_items = [
f'*Playbook Complete* (_{self.guid}_)'
]
msg_items = [f"*Playbook Complete* (_{self.guid}_)"]
if failures or unreachable:
color = 'danger'
msg_items.append('\n*Failed!*')
color = "danger"
msg_items.append("\n*Failed!*")
else:
color = 'good'
msg_items.append('\n*Success!*')
color = "good"
msg_items.append("\n*Success!*")
msg_items.append(f'```\n{t}\n```')
msg_items.append(f"```\n{t}\n```")
msg = '\n'.join(msg_items)
msg = "\n".join(msg_items)
attachments.append({
'fallback': msg,
'fields': [
{
'value': msg
}
],
'color': color,
'mrkdwn_in': ['text', 'fallback', 'fields']
})
attachments.append(
{"fallback": msg, "fields": [{"value": msg}], "color": color, "mrkdwn_in": ["text", "fallback", "fields"]}
)
self.send_msg(attachments=attachments)
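The slack callback's `send_msg()` above serializes a fixed-shape payload before POSTing it to the webhook; a standalone sketch of that assembly step (field names follow the diff, the channel/username values are placeholders):

```python
import json

# Sketch of the Slack webhook payload assembled by send_msg() above.
# Only the payload construction is shown; the actual POST via open_url()
# is omitted.
def build_payload(channel, username, attachments):
    return json.dumps({
        "channel": channel,
        "username": username,
        "attachments": attachments,
        "parse": "none",
    })


payload = build_payload("#ansible", "ansible", [{"fallback": "done", "color": "good"}])
print(payload)
```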


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -84,11 +83,10 @@ examples: >-
authtoken = f23blad6-5965-4537-bf69-5b5a545blabla88
"""
import json
import uuid
import socket
import getpass
import json
import socket
import uuid
from os.path import basename
from ansible.module_utils.ansible_release import __version__ as ansible_version
@@ -101,7 +99,7 @@ from ansible_collections.community.general.plugins.module_utils.datetime import
)
class SplunkHTTPCollectorSource(object):
class SplunkHTTPCollectorSource:
def __init__(self):
self.ansible_check_mode = False
self.ansible_playbook = ""
@@ -111,7 +109,7 @@ class SplunkHTTPCollectorSource(object):
self.user = getpass.getuser()
def send_event(self, url, authtoken, validate_certs, include_milliseconds, batch, state, result, runtime):
if result._task_fields['args'].get('_ansible_check_mode') is True:
if result._task_fields["args"].get("_ansible_check_mode") is True:
self.ansible_check_mode = True
if result._task._role:
@@ -119,33 +117,33 @@ class SplunkHTTPCollectorSource(object):
else:
ansible_role = None
if 'args' in result._task_fields:
del result._task_fields['args']
if "args" in result._task_fields:
del result._task_fields["args"]
data = {}
data['uuid'] = result._task._uuid
data['session'] = self.session
data["uuid"] = result._task._uuid
data["session"] = self.session
if batch is not None:
data['batch'] = batch
data['status'] = state
data["batch"] = batch
data["status"] = state
if include_milliseconds:
time_format = '%Y-%m-%d %H:%M:%S.%f +0000'
time_format = "%Y-%m-%d %H:%M:%S.%f +0000"
else:
time_format = '%Y-%m-%d %H:%M:%S +0000'
time_format = "%Y-%m-%d %H:%M:%S +0000"
data['timestamp'] = now().strftime(time_format)
data['host'] = self.host
data['ip_address'] = self.ip_address
data['user'] = self.user
data['runtime'] = runtime
data['ansible_version'] = ansible_version
data['ansible_check_mode'] = self.ansible_check_mode
data['ansible_host'] = result._host.name
data['ansible_playbook'] = self.ansible_playbook
data['ansible_role'] = ansible_role
data['ansible_task'] = result._task_fields
data['ansible_result'] = result._result
data["timestamp"] = now().strftime(time_format)
data["host"] = self.host
data["ip_address"] = self.ip_address
data["user"] = self.user
data["runtime"] = runtime
data["ansible_version"] = ansible_version
data["ansible_check_mode"] = self.ansible_check_mode
data["ansible_host"] = result._host.name
data["ansible_playbook"] = self.ansible_playbook
data["ansible_role"] = ansible_role
data["ansible_task"] = result._task_fields
data["ansible_result"] = result._result
# This wraps the json payload in and outer json event needed by Splunk
jsondata = json.dumps({"event": data}, cls=AnsibleJSONEncoder, sort_keys=True)
@@ -153,23 +151,20 @@ class SplunkHTTPCollectorSource(object):
open_url(
url,
jsondata,
headers={
'Content-type': 'application/json',
'Authorization': f"Splunk {authtoken}"
},
method='POST',
validate_certs=validate_certs
headers={"Content-type": "application/json", "Authorization": f"Splunk {authtoken}"},
method="POST",
validate_certs=validate_certs,
)
class CallbackModule(CallbackBase):
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'notification'
CALLBACK_NAME = 'community.general.splunk'
CALLBACK_TYPE = "notification"
CALLBACK_NAME = "community.general.splunk"
CALLBACK_NEEDS_WHITELIST = True
def __init__(self, display=None):
super(CallbackModule, self).__init__(display=display)
super().__init__(display=display)
self.start_datetimes = {} # Collect task start times
self.url = None
self.authtoken = None
@@ -179,41 +174,40 @@ class CallbackModule(CallbackBase):
self.splunk = SplunkHTTPCollectorSource()
def _runtime(self, result):
return (
now() -
self.start_datetimes[result._task._uuid]
).total_seconds()
return (now() - self.start_datetimes[result._task._uuid]).total_seconds()
def set_options(self, task_keys=None, var_options=None, direct=None):
super(CallbackModule, self).set_options(task_keys=task_keys,
var_options=var_options,
direct=direct)
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
self.url = self.get_option('url')
self.url = self.get_option("url")
if self.url is None:
self.disabled = True
self._display.warning('Splunk HTTP collector source URL was '
'not provided. The Splunk HTTP collector '
'source URL can be provided using the '
'`SPLUNK_URL` environment variable or '
'in the ansible.cfg file.')
self._display.warning(
"Splunk HTTP collector source URL was "
"not provided. The Splunk HTTP collector "
"source URL can be provided using the "
"`SPLUNK_URL` environment variable or "
"in the ansible.cfg file."
)
self.authtoken = self.get_option('authtoken')
self.authtoken = self.get_option("authtoken")
if self.authtoken is None:
self.disabled = True
self._display.warning('Splunk HTTP collector requires an authentication'
'token. The Splunk HTTP collector '
'authentication token can be provided using the '
'`SPLUNK_AUTHTOKEN` environment variable or '
'in the ansible.cfg file.')
self._display.warning(
"Splunk HTTP collector requires an authentication "
"token. The Splunk HTTP collector "
"authentication token can be provided using the "
"`SPLUNK_AUTHTOKEN` environment variable or "
"in the ansible.cfg file."
)
self.validate_certs = self.get_option('validate_certs')
self.validate_certs = self.get_option("validate_certs")
self.include_milliseconds = self.get_option('include_milliseconds')
self.include_milliseconds = self.get_option("include_milliseconds")
self.batch = self.get_option('batch')
self.batch = self.get_option("batch")
def v2_playbook_on_start(self, playbook):
self.splunk.ansible_playbook = basename(playbook._file_name)
@@ -231,9 +225,9 @@ class CallbackModule(CallbackBase):
self.validate_certs,
self.include_milliseconds,
self.batch,
'OK',
"OK",
result,
self._runtime(result)
self._runtime(result),
)
def v2_runner_on_skipped(self, result, **kwargs):
@@ -243,9 +237,9 @@ class CallbackModule(CallbackBase):
self.validate_certs,
self.include_milliseconds,
self.batch,
'SKIPPED',
"SKIPPED",
result,
self._runtime(result)
self._runtime(result),
)
def v2_runner_on_failed(self, result, **kwargs):
@@ -255,21 +249,21 @@ class CallbackModule(CallbackBase):
self.validate_certs,
self.include_milliseconds,
self.batch,
'FAILED',
"FAILED",
result,
self._runtime(result)
self._runtime(result),
)
def runner_on_async_failed(self, result, **kwargs):
def v2_runner_on_async_failed(self, result, **kwargs):
self.splunk.send_event(
self.url,
self.authtoken,
self.validate_certs,
self.include_milliseconds,
self.batch,
'FAILED',
"FAILED",
result,
self._runtime(result)
self._runtime(result),
)
def v2_runner_on_unreachable(self, result, **kwargs):
@@ -279,7 +273,7 @@ class CallbackModule(CallbackBase):
self.validate_certs,
self.include_milliseconds,
self.batch,
'UNREACHABLE',
"UNREACHABLE",
result,
self._runtime(result)
self._runtime(result),
)
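The comment in the splunk diff above notes that the JSON payload is wrapped "in an outer json event needed by Splunk"; a standalone sketch of that envelope step:

```python
import json

# Sketch of the HTTP Event Collector envelope used by the splunk
# callback above: the per-task data dict is embedded in an outer
# {"event": ...} object before being POSTed.
def to_hec_event(data):
    return json.dumps({"event": data}, sort_keys=True)


jsondata = to_hec_event({"status": "OK", "runtime": 0.5})
print(jsondata)
```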


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -41,11 +40,10 @@ examples: |-
url = https://endpoint1.collection.us2.sumologic.com/receiver/v1/http/R8moSv1d8EW9LAUFZJ6dbxCFxwLH6kfCdcBfddlfxCbLuL-BN5twcTpMk__pYy_cDmp==
"""
import json
import uuid
import socket
import getpass
import json
import socket
import uuid
from os.path import basename
from ansible.module_utils.ansible_release import __version__ as ansible_version
@@ -58,7 +56,7 @@ from ansible_collections.community.general.plugins.module_utils.datetime import
)
class SumologicHTTPCollectorSource(object):
class SumologicHTTPCollectorSource:
def __init__(self):
self.ansible_check_mode = False
self.ansible_playbook = ""
@@ -68,7 +66,7 @@ class SumologicHTTPCollectorSource(object):
self.user = getpass.getuser()
def send_event(self, url, state, result, runtime):
if result._task_fields['args'].get('_ansible_check_mode') is True:
if result._task_fields["args"].get("_ansible_check_mode") is True:
self.ansible_check_mode = True
if result._task._role:
@@ -76,67 +74,63 @@ class SumologicHTTPCollectorSource(object):
else:
ansible_role = None
if 'args' in result._task_fields:
del result._task_fields['args']
if "args" in result._task_fields:
del result._task_fields["args"]
data = {}
data['uuid'] = result._task._uuid
data['session'] = self.session
data['status'] = state
data['timestamp'] = now().strftime('%Y-%m-%d %H:%M:%S +0000')
data['host'] = self.host
data['ip_address'] = self.ip_address
data['user'] = self.user
data['runtime'] = runtime
data['ansible_version'] = ansible_version
data['ansible_check_mode'] = self.ansible_check_mode
data['ansible_host'] = result._host.name
data['ansible_playbook'] = self.ansible_playbook
data['ansible_role'] = ansible_role
data['ansible_task'] = result._task_fields
data['ansible_result'] = result._result
data["uuid"] = result._task._uuid
data["session"] = self.session
data["status"] = state
data["timestamp"] = now().strftime("%Y-%m-%d %H:%M:%S +0000")
data["host"] = self.host
data["ip_address"] = self.ip_address
data["user"] = self.user
data["runtime"] = runtime
data["ansible_version"] = ansible_version
data["ansible_check_mode"] = self.ansible_check_mode
data["ansible_host"] = result._host.name
data["ansible_playbook"] = self.ansible_playbook
data["ansible_role"] = ansible_role
data["ansible_task"] = result._task_fields
data["ansible_result"] = result._result
open_url(
url,
data=json.dumps(data, cls=AnsibleJSONEncoder, sort_keys=True),
headers={
'Content-type': 'application/json',
'X-Sumo-Host': data['ansible_host']
},
method='POST'
headers={"Content-type": "application/json", "X-Sumo-Host": data["ansible_host"]},
method="POST",
)
class CallbackModule(CallbackBase):
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'notification'
CALLBACK_NAME = 'community.general.sumologic'
CALLBACK_TYPE = "notification"
CALLBACK_NAME = "community.general.sumologic"
CALLBACK_NEEDS_WHITELIST = True
def __init__(self, display=None):
super(CallbackModule, self).__init__(display=display)
super().__init__(display=display)
self.start_datetimes = {} # Collect task start times
self.url = None
self.sumologic = SumologicHTTPCollectorSource()
def _runtime(self, result):
return (
now() -
self.start_datetimes[result._task._uuid]
).total_seconds()
return (now() - self.start_datetimes[result._task._uuid]).total_seconds()
def set_options(self, task_keys=None, var_options=None, direct=None):
super(CallbackModule, self).set_options(task_keys=task_keys, var_options=var_options, direct=direct)
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
self.url = self.get_option('url')
self.url = self.get_option("url")
if self.url is None:
self.disabled = True
self._display.warning('Sumologic HTTP collector source URL was '
'not provided. The Sumologic HTTP collector '
'source URL can be provided using the '
'`SUMOLOGIC_URL` environment variable or '
'in the ansible.cfg file.')
self._display.warning(
"Sumologic HTTP collector source URL was "
"not provided. The Sumologic HTTP collector "
"source URL can be provided using the "
"`SUMOLOGIC_URL` environment variable or "
"in the ansible.cfg file."
)
def v2_playbook_on_start(self, playbook):
self.sumologic.ansible_playbook = basename(playbook._file_name)
@@ -148,41 +142,16 @@ class CallbackModule(CallbackBase):
self.start_datetimes[task._uuid] = now()
def v2_runner_on_ok(self, result, **kwargs):
self.sumologic.send_event(
self.url,
'OK',
result,
self._runtime(result)
)
self.sumologic.send_event(self.url, "OK", result, self._runtime(result))
def v2_runner_on_skipped(self, result, **kwargs):
self.sumologic.send_event(
self.url,
'SKIPPED',
result,
self._runtime(result)
)
self.sumologic.send_event(self.url, "SKIPPED", result, self._runtime(result))
def v2_runner_on_failed(self, result, **kwargs):
self.sumologic.send_event(
self.url,
'FAILED',
result,
self._runtime(result)
)
self.sumologic.send_event(self.url, "FAILED", result, self._runtime(result))
def runner_on_async_failed(self, result, **kwargs):
self.sumologic.send_event(
self.url,
'FAILED',
result,
self._runtime(result)
)
self.sumologic.send_event(self.url, "FAILED", result, self._runtime(result))
def v2_runner_on_unreachable(self, result, **kwargs):
self.sumologic.send_event(
self.url,
'UNREACHABLE',
result,
self._runtime(result)
)
self.sumologic.send_event(self.url, "UNREACHABLE", result, self._runtime(result))
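Both the splunk and sumologic callbacks above share the same `_runtime()` pattern: stash a start time keyed by task UUID in `v2_playbook_on_task_start`, then subtract on completion. A stdlib-only sketch (the real plugins use a `now()` helper from `module_utils.datetime`; the UUID value here is a placeholder):

```python
from datetime import datetime, timezone

# Sketch of the per-task runtime bookkeeping used by the splunk and
# sumologic callbacks above.
start_datetimes = {}


def task_start(task_uuid):
    """Record the (timezone-aware) start time for a task."""
    start_datetimes[task_uuid] = datetime.now(timezone.utc)


def runtime(task_uuid):
    """Seconds elapsed since task_start() was called for this UUID."""
    return (datetime.now(timezone.utc) - start_datetimes[task_uuid]).total_seconds()


task_start("abc123")
elapsed = runtime("abc123")
```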


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -57,7 +56,6 @@ options:
import logging
import logging.handlers
import socket
from ansible.plugins.callback import CallbackBase
@@ -69,62 +67,89 @@ class CallbackModule(CallbackBase):
"""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'notification'
CALLBACK_NAME = 'community.general.syslog_json'
CALLBACK_TYPE = "notification"
CALLBACK_NAME = "community.general.syslog_json"
CALLBACK_NEEDS_WHITELIST = True
def __init__(self):
super(CallbackModule, self).__init__()
super().__init__()
def set_options(self, task_keys=None, var_options=None, direct=None):
super(CallbackModule, self).set_options(task_keys=task_keys, var_options=var_options, direct=direct)
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
syslog_host = self.get_option("server")
syslog_port = int(self.get_option("port"))
syslog_facility = self.get_option("facility")
self.logger = logging.getLogger('ansible logger')
self.logger = logging.getLogger("ansible logger")
self.logger.setLevel(logging.DEBUG)
self.handler = logging.handlers.SysLogHandler(
address=(syslog_host, syslog_port),
facility=syslog_facility
)
self.handler = logging.handlers.SysLogHandler(address=(syslog_host, syslog_port), facility=syslog_facility)
self.logger.addHandler(self.handler)
self.hostname = socket.gethostname()
def v2_runner_on_failed(self, result, ignore_errors=False):
res = result._result
host = result._host.get_name()
self.logger.error('%s ansible-command: task execution FAILED; host: %s; message: %s', self.hostname, host, self._dump_results(res))
self.logger.error(
"%s ansible-command: task execution FAILED; host: %s; message: %s",
self.hostname,
host,
self._dump_results(res),
)
def v2_runner_on_ok(self, result):
res = result._result
host = result._host.get_name()
if result._task.action != "gather_facts" or self.get_option("setup"):
self.logger.info('%s ansible-command: task execution OK; host: %s; message: %s', self.hostname, host, self._dump_results(res))
self.logger.info(
"%s ansible-command: task execution OK; host: %s; message: %s",
self.hostname,
host,
self._dump_results(res),
)
def v2_runner_on_skipped(self, result):
host = result._host.get_name()
self.logger.info('%s ansible-command: task execution SKIPPED; host: %s; message: %s', self.hostname, host, 'skipped')
self.logger.info(
"%s ansible-command: task execution SKIPPED; host: %s; message: %s", self.hostname, host, "skipped"
)
def v2_runner_on_unreachable(self, result):
res = result._result
host = result._host.get_name()
self.logger.error('%s ansible-command: task execution UNREACHABLE; host: %s; message: %s', self.hostname, host, self._dump_results(res))
self.logger.error(
"%s ansible-command: task execution UNREACHABLE; host: %s; message: %s",
self.hostname,
host,
self._dump_results(res),
)
def v2_runner_on_async_failed(self, result):
res = result._result
host = result._host.get_name()
jid = result._result.get('ansible_job_id')
self.logger.error('%s ansible-command: task execution FAILED; host: %s; message: %s', self.hostname, host, self._dump_results(res))
# jid = result._result.get("ansible_job_id")
self.logger.error(
"%s ansible-command: task execution FAILED; host: %s; message: %s",
self.hostname,
host,
self._dump_results(res),
)
def v2_playbook_on_import_for_host(self, result, imported_file):
host = result._host.get_name()
self.logger.info('%s ansible-command: playbook IMPORTED; host: %s; message: imported file %s', self.hostname, host, imported_file)
self.logger.info(
"%s ansible-command: playbook IMPORTED; host: %s; message: imported file %s",
self.hostname,
host,
imported_file,
)
def v2_playbook_on_not_import_for_host(self, result, missing_file):
host = result._host.get_name()
self.logger.info('%s ansible-command: playbook NOT IMPORTED; host: %s; message: missing file %s', self.hostname, host, missing_file)
self.logger.info(
"%s ansible-command: playbook NOT IMPORTED; host: %s; message: missing file %s",
self.hostname,
host,
missing_file,
)
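The syslog_json callback's `set_options()` above wires a dedicated logger to a `SysLogHandler`; a standalone sketch of that wiring (the host/port are placeholder assumptions; UDP syslog is fire-and-forget, so emitting works even with no listener):

```python
import logging
import logging.handlers
import socket

# Sketch of the logger setup performed by the syslog_json callback:
# a DEBUG-level logger feeding a SysLogHandler over UDP.
logger = logging.getLogger("ansible logger")
logger.setLevel(logging.DEBUG)
handler = logging.handlers.SysLogHandler(address=("localhost", 514))
logger.addHandler(handler)

hostname = socket.gethostname()
logger.info("%s ansible-command: task execution OK; host: %s", hostname, "web1")
```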


@@ -1,5 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2025, Felix Fontein <felix@fontein.de>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
@@ -52,8 +50,8 @@ from ansible.plugins.callback.default import CallbackModule as Default
class CallbackModule(Default):
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'stdout'
CALLBACK_NAME = 'community.general.tasks_only'
CALLBACK_TYPE = "stdout"
CALLBACK_NAME = "community.general.tasks_only"
def v2_playbook_on_play_start(self, play):
pass
@@ -62,7 +60,7 @@ class CallbackModule(Default):
pass
def set_options(self, *args, **kwargs):
result = super(CallbackModule, self).set_options(*args, **kwargs)
result = super().set_options(*args, **kwargs)
self.number_of_columns = self.get_option("number_of_columns")
if self.number_of_columns is not None:
self._display.columns = self.number_of_columns


@@ -1,5 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2024, kurokobo <kurokobo@protonmail.com>
# Copyright (c) 2014, Michael DeHaan <michael.dehaan@gmail.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -7,7 +5,6 @@
from __future__ import annotations
DOCUMENTATION = r"""
name: timestamp
type: stdout
@@ -51,12 +48,13 @@ extends_documentation_fragment:
"""
import sys
import types
from datetime import datetime
from ansible.module_utils.common.text.converters import to_text
from ansible.plugins.callback.default import CallbackModule as Default
from ansible.utils.display import get_text_width
from ansible.module_utils.common.text.converters import to_text
from datetime import datetime
import types
import sys
# Store whether the zoneinfo module is available
_ZONEINFO_AVAILABLE = sys.version_info >= (3, 9)
@@ -91,7 +89,7 @@ def banner(self, msg, color=None, cows=True):
msg = msg.strip()
try:
star_len = self.columns - get_text_width(msg) - timestamp_len
except EnvironmentError:
except OSError:
star_len = self.columns - len(msg) - timestamp_len
if star_len <= 3:
star_len = 3
@@ -105,13 +103,13 @@ class CallbackModule(Default):
CALLBACK_NAME = "community.general.timestamp"
def __init__(self):
super(CallbackModule, self).__init__()
super().__init__()
# Replace the banner method of the display object with the custom one
self._display.banner = types.MethodType(banner, self._display)
def set_options(self, task_keys=None, var_options=None, direct=None):
super(CallbackModule, self).set_options(task_keys=task_keys, var_options=var_options, direct=direct)
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
# Store zoneinfo for specified timezone if available
tzinfo = None
@@ -121,5 +119,5 @@ class CallbackModule(Default):
tzinfo = ZoneInfo(self.get_option("timezone"))
# Inject options into the display object
setattr(self._display, "timestamp_tzinfo", tzinfo)
setattr(self._display, "timestamp_format_string", self.get_option("format_string"))
self._display.timestamp_tzinfo = tzinfo
self._display.timestamp_format_string = self.get_option("format_string")
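The banner replacement shown in the timestamp callback above can be illustrated standalone: a plain function is bound onto a display object at runtime with `types.MethodType`, and the star padding is shortened to leave room for a timestamp column. This is a hedged sketch, not the plugin's code; the `Display` class, column width, and time format here are assumptions for the example.

```python
# Sketch of the runtime method-rebinding technique used by the timestamp
# callback: replace an object's banner() with a timestamp-aware version.
import types
from datetime import datetime


class Display:
    """Stand-in for Ansible's display object (illustrative only)."""
    columns = 60

    def banner(self, msg):
        return msg


def timestamped_banner(self, msg):
    # Shorten the star padding so the line still fits self.columns
    # once the timestamp is appended at the end.
    timestamp = datetime.now().strftime("%H:%M:%S")
    star_len = max(self.columns - len(msg) - len(timestamp) - 2, 3)
    return f"{msg} {'*' * star_len} {timestamp}"


display = Display()
# Rebind banner on this one instance, as the plugin does with self._display.
display.banner = types.MethodType(timestamped_banner, display)
line = display.banner("PLAY [demo]")
```

Binding with `types.MethodType` affects only that instance, which is why the plugin can patch `self._display` without touching other `Display` objects.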


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2023, Al Bowles <@akatch>
# Copyright (c) 2012-2014, Michael DeHaan <michael.dehaan@gmail.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -21,16 +20,16 @@ requirements:
"""
from os.path import basename
from ansible import constants as C
from ansible import context
from ansible.module_utils.common.text.converters import to_text
from ansible.utils.color import colorize, hostcolor
from ansible.plugins.callback.default import CallbackModule as CallbackModule_default
from ansible.utils.color import colorize, hostcolor
class CallbackModule(CallbackModule_default):
'''
"""
Design goals:
- Print consolidated output that looks like a *NIX startup log
- Defaults should avoid displaying unnecessary information wherever possible
@@ -40,14 +39,16 @@ class CallbackModule(CallbackModule_default):
- Add option to display all hostnames on a single line in the appropriate result color (failures may have a separate line)
- Consolidate stats display
- Don't show play name if no hosts found
'''
"""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'stdout'
CALLBACK_NAME = 'community.general.unixy'
CALLBACK_TYPE = "stdout"
CALLBACK_NAME = "community.general.unixy"
def _run_is_verbose(self, result):
return ((self._display.verbosity > 0 or '_ansible_verbose_always' in result._result) and '_ansible_verbose_override' not in result._result)
return (
self._display.verbosity > 0 or "_ansible_verbose_always" in result._result
) and "_ansible_verbose_override" not in result._result
def _get_task_display_name(self, task):
self.task_display_name = None
@@ -60,8 +61,8 @@ class CallbackModule(CallbackModule_default):
self.task_display_name = task_display_name
def _preprocess_result(self, result):
self.delegated_vars = result._result.get('_ansible_delegated_vars', None)
self._handle_exception(result._result, use_stderr=self.get_option('display_failed_stderr'))
self.delegated_vars = result._result.get("_ansible_delegated_vars", None)
self._handle_exception(result._result, use_stderr=self.get_option("display_failed_stderr"))
self._handle_warnings(result._result)
def _process_result_output(self, result, msg):
@@ -73,16 +74,16 @@ class CallbackModule(CallbackModule_default):
return task_result
if self.delegated_vars:
task_delegate_host = self.delegated_vars['ansible_host']
task_delegate_host = self.delegated_vars["ansible_host"]
task_result = f"{task_host} -> {task_delegate_host} {msg}"
if result._result.get('msg') and result._result.get('msg') != "All items completed":
if result._result.get("msg") and result._result.get("msg") != "All items completed":
task_result += f" | msg: {to_text(result._result.get('msg'))}"
if result._result.get('stdout'):
if result._result.get("stdout"):
task_result += f" | stdout: {result._result.get('stdout')}"
if result._result.get('stderr'):
if result._result.get("stderr"):
task_result += f" | stderr: {result._result.get('stderr')}"
return task_result
@@ -90,7 +91,7 @@ class CallbackModule(CallbackModule_default):
def v2_playbook_on_task_start(self, task, is_conditional):
self._get_task_display_name(task)
if self.task_display_name is not None:
if task.check_mode and self.get_option('check_mode_markers'):
if task.check_mode and self.get_option("check_mode_markers"):
self._display.display(f"{self.task_display_name} (check mode)...")
else:
self._display.display(f"{self.task_display_name}...")
@@ -98,14 +99,14 @@ class CallbackModule(CallbackModule_default):
def v2_playbook_on_handler_task_start(self, task):
self._get_task_display_name(task)
if self.task_display_name is not None:
if task.check_mode and self.get_option('check_mode_markers'):
if task.check_mode and self.get_option("check_mode_markers"):
self._display.display(f"{self.task_display_name} (via handler in check mode)... ")
else:
self._display.display(f"{self.task_display_name} (via handler)... ")
def v2_playbook_on_play_start(self, play):
name = play.get_name().strip()
if play.check_mode and self.get_option('check_mode_markers'):
if play.check_mode and self.get_option("check_mode_markers"):
if name and play.hosts:
msg = f"\n- {name} (in check mode) on hosts: {','.join(play.hosts)} -"
else:
@@ -119,7 +120,7 @@ class CallbackModule(CallbackModule_default):
self._display.display(msg)
def v2_runner_on_skipped(self, result, ignore_errors=False):
if self.get_option('display_skipped_hosts'):
if self.get_option("display_skipped_hosts"):
self._preprocess_result(result)
display_color = C.COLOR_SKIP
msg = "skipped"
@@ -138,12 +139,12 @@ class CallbackModule(CallbackModule_default):
msg += f" | item: {item_value}"
task_result = self._process_result_output(result, msg)
self._display.display(f" {task_result}", display_color, stderr=self.get_option('display_failed_stderr'))
self._display.display(f" {task_result}", display_color, stderr=self.get_option("display_failed_stderr"))
def v2_runner_on_ok(self, result, msg="ok", display_color=C.COLOR_OK):
self._preprocess_result(result)
result_was_changed = ('changed' in result._result and result._result['changed'])
result_was_changed = "changed" in result._result and result._result["changed"]
if result_was_changed:
msg = "done"
item_value = self._get_item_label(result._result)
@@ -152,7 +153,7 @@ class CallbackModule(CallbackModule_default):
display_color = C.COLOR_CHANGED
task_result = self._process_result_output(result, msg)
self._display.display(f" {task_result}", display_color)
elif self.get_option('display_ok_hosts'):
elif self.get_option("display_ok_hosts"):
task_result = self._process_result_output(result, msg)
self._display.display(f" {task_result}", display_color)
@@ -172,17 +173,17 @@ class CallbackModule(CallbackModule_default):
display_color = C.COLOR_UNREACHABLE
task_result = self._process_result_output(result, msg)
self._display.display(f" {task_result}", display_color, stderr=self.get_option('display_failed_stderr'))
self._display.display(f" {task_result}", display_color, stderr=self.get_option("display_failed_stderr"))
def v2_on_file_diff(self, result):
if result._task.loop and 'results' in result._result:
for res in result._result['results']:
if 'diff' in res and res['diff'] and res.get('changed', False):
diff = self._get_diff(res['diff'])
if result._task.loop and "results" in result._result:
for res in result._result["results"]:
if "diff" in res and res["diff"] and res.get("changed", False):
diff = self._get_diff(res["diff"])
if diff:
self._display.display(diff)
elif 'diff' in result._result and result._result['diff'] and result._result.get('changed', False):
diff = self._get_diff(result._result['diff'])
elif "diff" in result._result and result._result["diff"] and result._result.get("changed", False):
diff = self._get_diff(result._result["diff"])
if diff:
self._display.display(diff)
@@ -198,30 +199,30 @@ class CallbackModule(CallbackModule_default):
f" {hostcolor(h, t)} : {colorize('ok', t['ok'], C.COLOR_OK)} {colorize('changed', t['changed'], C.COLOR_CHANGED)} "
f"{colorize('unreachable', t['unreachable'], C.COLOR_UNREACHABLE)} {colorize('failed', t['failures'], C.COLOR_ERROR)} "
f"{colorize('rescued', t['rescued'], C.COLOR_OK)} {colorize('ignored', t['ignored'], C.COLOR_WARN)}",
screen_only=True
screen_only=True,
)
self._display.display(
f" {hostcolor(h, t, False)} : {colorize('ok', t['ok'], None)} {colorize('changed', t['changed'], None)} "
f"{colorize('unreachable', t['unreachable'], None)} {colorize('failed', t['failures'], None)} {colorize('rescued', t['rescued'], None)} "
f"{colorize('ignored', t['ignored'], None)}",
log_only=True
log_only=True,
)
if stats.custom and self.get_option('show_custom_stats'):
if stats.custom and self.get_option("show_custom_stats"):
self._display.banner("CUSTOM STATS: ")
# per host
# TODO: come up with 'pretty format'
for k in sorted(stats.custom.keys()):
if k == '_run':
if k == "_run":
continue
stat_val = self._dump_results(stats.custom[k], indent=1).replace('\n', '')
self._display.display(f'\t{k}: {stat_val}')
stat_val = self._dump_results(stats.custom[k], indent=1).replace("\n", "")
self._display.display(f"\t{k}: {stat_val}")
# print per run custom stats
if '_run' in stats.custom:
if "_run" in stats.custom:
self._display.display("", screen_only=True)
stat_val_run = self._dump_results(stats.custom['_run'], indent=1).replace('\n', '')
self._display.display(f'\tRUN: {stat_val_run}')
stat_val_run = self._dump_results(stats.custom["_run"], indent=1).replace("\n", "")
self._display.display(f"\tRUN: {stat_val_run}")
self._display.display("", screen_only=True)
def v2_playbook_on_no_hosts_matched(self):
@@ -231,21 +232,24 @@ class CallbackModule(CallbackModule_default):
self._display.display(" Ran out of hosts!", color=C.COLOR_ERROR)
def v2_playbook_on_start(self, playbook):
if context.CLIARGS['check'] and self.get_option('check_mode_markers'):
if context.CLIARGS["check"] and self.get_option("check_mode_markers"):
self._display.display(f"Executing playbook {basename(playbook._file_name)} in check mode")
else:
self._display.display(f"Executing playbook {basename(playbook._file_name)}")
# show CLI arguments
if self._display.verbosity > 3:
if context.CLIARGS.get('args'):
self._display.display(f"Positional arguments: {' '.join(context.CLIARGS['args'])}",
color=C.COLOR_VERBOSE, screen_only=True)
if context.CLIARGS.get("args"):
self._display.display(
f"Positional arguments: {' '.join(context.CLIARGS['args'])}",
color=C.COLOR_VERBOSE,
screen_only=True,
)
for argument in (a for a in context.CLIARGS if a != 'args'):
for argument in (a for a in context.CLIARGS if a != "args"):
val = context.CLIARGS[argument]
if val:
self._display.vvvv(f'{argument}: {val}')
self._display.vvvv(f"{argument}: {val}")
def v2_runner_retry(self, result):
msg = f" Retrying... ({result._result['attempts']} of {result._result['retries']})"


@@ -1,195 +0,0 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# Make coding more python3-ish
from __future__ import annotations
DOCUMENTATION = r"""
author: Unknown (!UNKNOWN)
name: yaml
type: stdout
short_description: YAML-ized Ansible screen output
deprecated:
removed_in: 12.0.0
why: Starting in ansible-core 2.13, the P(ansible.builtin.default#callback) callback has support for printing output in
YAML format.
alternative: Use O(ansible.builtin.default#callback:result_format=yaml).
description:
- Ansible output that can be quite a bit easier to read than the default JSON formatting.
extends_documentation_fragment:
- default_callback
requirements:
- set as stdout in configuration
seealso:
- plugin: ansible.builtin.default
plugin_type: callback
description: >-
There is a parameter O(ansible.builtin.default#callback:result_format) in P(ansible.builtin.default#callback) that allows
you to change the output format to YAML.
notes:
- With ansible-core 2.13 or newer, you can instead specify V(yaml) for the parameter O(ansible.builtin.default#callback:result_format)
in P(ansible.builtin.default#callback).
"""
import yaml
import json
import re
import string
from collections.abc import Mapping, Sequence
from ansible.module_utils.common.text.converters import to_text
from ansible.plugins.callback import strip_internal_keys, module_response_deepcopy
from ansible.plugins.callback.default import CallbackModule as Default
# from http://stackoverflow.com/a/15423007/115478
def should_use_block(value):
"""Returns true if string should be in block format"""
for c in "\u000a\u000d\u001c\u001d\u001e\u0085\u2028\u2029":
if c in value:
return True
return False
def adjust_str_value_for_block(value):
# we care more about readability than accuracy, so...
# ...no trailing space
value = value.rstrip()
# ...and non-printable characters
value = ''.join(x for x in value if x in string.printable or ord(x) >= 0xA0)
# ...tabs prevent blocks from expanding
value = value.expandtabs()
# ...and odd bits of whitespace
value = re.sub(r'[\x0b\x0c\r]', '', value)
# ...as does trailing space
value = re.sub(r' +\n', '\n', value)
return value
def create_string_node(tag, value, style, default_style):
if style is None:
if should_use_block(value):
style = '|'
value = adjust_str_value_for_block(value)
else:
style = default_style
return yaml.representer.ScalarNode(tag, value, style=style)
try:
from ansible.module_utils.common.yaml import HAS_LIBYAML
# import below was added in https://github.com/ansible/ansible/pull/85039,
# first contained in ansible-core 2.19.0b2:
from ansible.utils.vars import transform_to_native_types
if HAS_LIBYAML:
from yaml.cyaml import CSafeDumper as SafeDumper
else:
from yaml import SafeDumper
class MyDumper(SafeDumper):
def represent_scalar(self, tag, value, style=None):
"""Uses block style for multi-line strings"""
node = create_string_node(tag, value, style, self.default_style)
if self.alias_key is not None:
self.represented_objects[self.alias_key] = node
return node
except ImportError:
# In case transform_to_native_types cannot be imported, we either have ansible-core 2.19.0b1
# (or some random commit from the devel or stable-2.19 branch after merging the DT changes
# and before transform_to_native_types was added), or we have a version without the DT changes.
# Here we simply assume we have a version without the DT changes, and thus can continue as
# with ansible-core 2.18 and before.
transform_to_native_types = None
from ansible.parsing.yaml.dumper import AnsibleDumper
class MyDumper(AnsibleDumper): # pylint: disable=inherit-non-class
def represent_scalar(self, tag, value, style=None):
"""Uses block style for multi-line strings"""
node = create_string_node(tag, value, style, self.default_style)
if self.alias_key is not None:
self.represented_objects[self.alias_key] = node
return node
def transform_recursively(value, transform):
# Since 2.19.0b7, this should no longer be needed:
# https://github.com/ansible/ansible/issues/85325
# https://github.com/ansible/ansible/pull/85389
if isinstance(value, Mapping):
return {transform(k): transform(v) for k, v in value.items()}
if isinstance(value, Sequence) and not isinstance(value, (str, bytes)):
return [transform(e) for e in value]
return transform(value)
class CallbackModule(Default):
"""
Variation of the Default output which uses nicely readable YAML instead
of JSON for printing results.
"""
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = 'stdout'
CALLBACK_NAME = 'community.general.yaml'
def __init__(self):
super(CallbackModule, self).__init__()
def _dump_results(self, result, indent=None, sort_keys=True, keep_invocation=False):
if result.get('_ansible_no_log', False):
return json.dumps(dict(censored="The output has been hidden due to the fact that 'no_log: true' was specified for this result"))
# All result keys starting with _ansible_ are internal, so remove them from the result before we output anything.
abridged_result = strip_internal_keys(module_response_deepcopy(result))
# remove invocation unless specifically wanting it
if not keep_invocation and self._display.verbosity < 3 and 'invocation' in result:
del abridged_result['invocation']
# remove diff information from screen output
if self._display.verbosity < 3 and 'diff' in result:
del abridged_result['diff']
# remove exception from screen output
if 'exception' in abridged_result:
del abridged_result['exception']
dumped = ''
# put changed and skipped into a header line
if 'changed' in abridged_result:
dumped += f"changed={str(abridged_result['changed']).lower()} "
del abridged_result['changed']
if 'skipped' in abridged_result:
dumped += f"skipped={str(abridged_result['skipped']).lower()} "
del abridged_result['skipped']
# if we already have stdout, we don't need stdout_lines
if 'stdout' in abridged_result and 'stdout_lines' in abridged_result:
abridged_result['stdout_lines'] = '<omitted>'
# if we already have stderr, we don't need stderr_lines
if 'stderr' in abridged_result and 'stderr_lines' in abridged_result:
abridged_result['stderr_lines'] = '<omitted>'
if abridged_result:
dumped += '\n'
if transform_to_native_types is not None:
abridged_result = transform_recursively(abridged_result, lambda v: transform_to_native_types(v, redact=False))
dumped += to_text(yaml.dump(abridged_result, allow_unicode=True, width=1000, Dumper=MyDumper, default_flow_style=False))
# indent by a couple of spaces
dumped = '\n '.join(dumped.split('\n')).rstrip()
return dumped
def _serialize_diff(self, diff):
return to_text(yaml.dump(diff, allow_unicode=True, width=1000, Dumper=AnsibleDumper, default_flow_style=False))
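The `represent_scalar` override in the yaml callback above can be reduced to a small self-contained sketch. This assumes plain PyYAML's `SafeDumper` rather than Ansible's dumper classes, and the helper names are illustrative: multi-line strings are forced into YAML's literal block style (`|`) so command output stays readable.

```python
# Minimal sketch of the block-scalar technique: emit multi-line strings
# with YAML literal style instead of escaped "\n" sequences.
import string
import yaml


def should_use_block(value):
    """Return True if the string contains line-break characters."""
    return any(c in value for c in "\u000a\u000d\u001c\u001d\u001e\u0085\u2028\u2029")


class BlockStyleDumper(yaml.SafeDumper):
    def represent_scalar(self, tag, value, style=None):
        if style is None and should_use_block(value):
            style = "|"
            # Block scalars cannot carry trailing whitespace or most
            # non-printable characters, so scrub them first.
            value = "".join(x for x in value.rstrip() if x in string.printable or ord(x) >= 0xA0)
        node = yaml.representer.ScalarNode(tag, value, style=style)
        if self.alias_key is not None:
            self.represented_objects[self.alias_key] = node
        return node


dumped = yaml.dump({"stdout": "line one\nline two"}, Dumper=BlockStyleDumper, default_flow_style=False)
```

With the default dumper the value would render as `"line one\nline two"` on one line; with the override it renders as an indented block under `stdout: |-`.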


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Based on local.py (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>
#
# (c) 2013, Maykel Moya <mmoya@speedyrails.com>
@@ -81,26 +80,26 @@ from ansible.errors import AnsibleError
from ansible.module_utils.basic import is_executable
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.text.converters import to_bytes
from ansible.plugins.connection import ConnectionBase, BUFSIZE
from ansible.plugins.connection import BUFSIZE, ConnectionBase
from ansible.utils.display import Display
display = Display()
class Connection(ConnectionBase):
""" Local chroot based connections """
"""Local chroot based connections"""
transport = 'community.general.chroot'
transport = "community.general.chroot"
has_pipelining = True
# su currently has an undiagnosed issue with calculating the file
# checksums (so copy, for instance, doesn't work right)
# Have to look into that before re-enabling this
has_tty = False
default_user = 'root'
default_user = "root"
def __init__(self, play_context, new_stdin, *args, **kwargs):
super(Connection, self).__init__(play_context, new_stdin, *args, **kwargs)
super().__init__(play_context, new_stdin, *args, **kwargs)
self.chroot = self._play_context.remote_addr
@@ -108,7 +107,7 @@ class Connection(ConnectionBase):
if not os.path.isdir(self.chroot):
raise AnsibleError(f"{self.chroot} is not a directory")
chrootsh = os.path.join(self.chroot, 'bin/sh')
chrootsh = os.path.join(self.chroot, "bin/sh")
# Want to check for a usable bourne shell inside the chroot.
# is_executable() == True is sufficient. For symlinks it
# gets really complicated really fast. So we punt on finding that
@@ -117,46 +116,46 @@ class Connection(ConnectionBase):
raise AnsibleError(f"{self.chroot} does not look like a chrootable dir (/bin/sh missing)")
def _connect(self):
""" connect to the chroot """
if not self.get_option('disable_root_check') and os.geteuid() != 0:
"""connect to the chroot"""
if not self.get_option("disable_root_check") and os.geteuid() != 0:
raise AnsibleError(
"chroot connection requires running as root. "
"You can override this check with the `disable_root_check` option.")
"You can override this check with the `disable_root_check` option."
)
if os.path.isabs(self.get_option('chroot_exe')):
self.chroot_cmd = self.get_option('chroot_exe')
if os.path.isabs(self.get_option("chroot_exe")):
self.chroot_cmd = self.get_option("chroot_exe")
else:
try:
self.chroot_cmd = get_bin_path(self.get_option('chroot_exe'))
self.chroot_cmd = get_bin_path(self.get_option("chroot_exe"))
except ValueError as e:
raise AnsibleError(str(e))
raise AnsibleError(str(e)) from e
super(Connection, self)._connect()
super()._connect()
if not self._connected:
display.vvv("THIS IS A LOCAL CHROOT DIR", host=self.chroot)
self._connected = True
def _buffered_exec_command(self, cmd, stdin=subprocess.PIPE):
""" run a command on the chroot. This is only needed for implementing
"""run a command on the chroot. This is only needed for implementing
put_file() and get_file() so that we don't have to read the whole file
into memory.
compared to exec_command() it loses some niceties like being able to
return the process's exit code immediately.
"""
executable = self.get_option('executable')
local_cmd = [self.chroot_cmd, self.chroot, executable, '-c', cmd]
executable = self.get_option("executable")
local_cmd = [self.chroot_cmd, self.chroot, executable, "-c", cmd]
display.vvv(f"EXEC {local_cmd}", host=self.chroot)
local_cmd = [to_bytes(i, errors='surrogate_or_strict') for i in local_cmd]
p = subprocess.Popen(local_cmd, shell=False, stdin=stdin,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
local_cmd = [to_bytes(i, errors="surrogate_or_strict") for i in local_cmd]
p = subprocess.Popen(local_cmd, shell=False, stdin=stdin, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
return p
def exec_command(self, cmd, in_data=None, sudoable=False):
""" run a command on the chroot """
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
"""run a command on the chroot"""
super().exec_command(cmd, in_data=in_data, sudoable=sudoable)
p = self._buffered_exec_command(cmd)
@@ -165,70 +164,70 @@ class Connection(ConnectionBase):
@staticmethod
def _prefix_login_path(remote_path):
""" Make sure that we put files into a standard path
"""Make sure that we put files into a standard path
If a path is relative, then we need to choose where to put it.
ssh chooses $HOME but we aren't guaranteed that a home dir will
exist in any given chroot. So for now we're choosing "/" instead.
This also happens to be the former default.
If a path is relative, then we need to choose where to put it.
ssh chooses $HOME but we aren't guaranteed that a home dir will
exist in any given chroot. So for now we're choosing "/" instead.
This also happens to be the former default.
Can revisit using $HOME instead if it is a problem
Can revisit using $HOME instead if it is a problem
"""
if not remote_path.startswith(os.path.sep):
remote_path = os.path.join(os.path.sep, remote_path)
return os.path.normpath(remote_path)
def put_file(self, in_path, out_path):
""" transfer a file from local to chroot """
super(Connection, self).put_file(in_path, out_path)
"""transfer a file from local to chroot"""
super().put_file(in_path, out_path)
display.vvv(f"PUT {in_path} TO {out_path}", host=self.chroot)
out_path = shlex_quote(self._prefix_login_path(out_path))
try:
with open(to_bytes(in_path, errors='surrogate_or_strict'), 'rb') as in_file:
with open(to_bytes(in_path, errors="surrogate_or_strict"), "rb") as in_file:
if not os.fstat(in_file.fileno()).st_size:
count = ' count=0'
count = " count=0"
else:
count = ''
count = ""
try:
p = self._buffered_exec_command(f'dd of={out_path} bs={BUFSIZE}{count}', stdin=in_file)
except OSError:
raise AnsibleError("chroot connection requires dd command in the chroot")
p = self._buffered_exec_command(f"dd of={out_path} bs={BUFSIZE}{count}", stdin=in_file)
except OSError as e:
raise AnsibleError("chroot connection requires dd command in the chroot") from e
try:
stdout, stderr = p.communicate()
except Exception:
except Exception as e:
traceback.print_exc()
raise AnsibleError(f"failed to transfer file {in_path} to {out_path}")
raise AnsibleError(f"failed to transfer file {in_path} to {out_path}") from e
if p.returncode != 0:
raise AnsibleError(f"failed to transfer file {in_path} to {out_path}:\n{stdout}\n{stderr}")
except IOError:
raise AnsibleError(f"file or module does not exist at: {in_path}")
except OSError as e:
raise AnsibleError(f"file or module does not exist at: {in_path}") from e
def fetch_file(self, in_path, out_path):
""" fetch a file from chroot to local """
super(Connection, self).fetch_file(in_path, out_path)
"""fetch a file from chroot to local"""
super().fetch_file(in_path, out_path)
display.vvv(f"FETCH {in_path} TO {out_path}", host=self.chroot)
in_path = shlex_quote(self._prefix_login_path(in_path))
try:
p = self._buffered_exec_command(f'dd if={in_path} bs={BUFSIZE}')
except OSError:
raise AnsibleError("chroot connection requires dd command in the chroot")
p = self._buffered_exec_command(f"dd if={in_path} bs={BUFSIZE}")
except OSError as e:
raise AnsibleError("chroot connection requires dd command in the chroot") from e
with open(to_bytes(out_path, errors='surrogate_or_strict'), 'wb+') as out_file:
with open(to_bytes(out_path, errors="surrogate_or_strict"), "wb+") as out_file:
try:
chunk = p.stdout.read(BUFSIZE)
while chunk:
out_file.write(chunk)
chunk = p.stdout.read(BUFSIZE)
except Exception:
except Exception as e:
traceback.print_exc()
raise AnsibleError(f"failed to transfer file {in_path} to {out_path}")
raise AnsibleError(f"failed to transfer file {in_path} to {out_path}") from e
stdout, stderr = p.communicate()
if p.returncode != 0:
raise AnsibleError(f"failed to transfer file {in_path} to {out_path}:\n{stdout}\n{stderr}")
def close(self):
""" terminate the connection; nothing to do here """
super(Connection, self).close()
"""terminate the connection; nothing to do here"""
super().close()
self._connected = False
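The `put_file()`/`fetch_file()` pattern above, streaming through `dd` executed inside the target root, can be sketched independently. Everything here (the `chroot` invocation, the paths, the buffer size) is an assumption for illustration, not the plugin's code, and actually running the transfer requires root.

```python
# Sketch of the dd-over-exec transfer pattern: a local file is streamed
# into a chroot by piping it to "dd of=<path>" run inside the target root.
import os
import subprocess

BUFSIZE = 65536  # assumed buffer size for the example


def build_dd_command(chroot, out_path, count_zero=False):
    # "count=0" mirrors the plugin's trick for empty sources: it makes dd
    # create an empty target file instead of waiting on stdin.
    count = " count=0" if count_zero else ""
    return ["chroot", chroot, "/bin/sh", "-c", f"dd of={out_path} bs={BUFSIZE}{count}"]


def put_file_via_dd(chroot, in_path, out_path):
    """Stream a local file into a chroot by piping it to dd (needs root)."""
    with open(in_path, "rb") as in_file:
        empty = os.fstat(in_file.fileno()).st_size == 0
        proc = subprocess.Popen(
            build_dd_command(chroot, out_path, count_zero=empty),
            stdin=in_file, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
        )
        stdout, stderr = proc.communicate()
    if proc.returncode != 0:
        raise RuntimeError(f"failed to transfer {in_path} to {out_path}: {stderr!r}")
```

Using `dd` as the in-chroot endpoint is what lets the connection plugin avoid reading the whole file into memory: the local side only holds one pipe buffer at a time.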


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Based on local.py (c) 2012, Michael DeHaan <michael.dehaan@gmail.com>
# Based on chroot.py (c) 2013, Maykel Moya <mmoya@speedyrails.com>
# Copyright (c) 2013, Michael Scherer <misc@zarb.org>
@@ -30,13 +29,14 @@ options:
HAVE_FUNC = False
try:
import func.overlord.client as fc
HAVE_FUNC = True
except ImportError:
pass
import os
import tempfile
import shutil
import tempfile
from ansible.errors import AnsibleError
from ansible.plugins.connection import ConnectionBase
@@ -46,7 +46,7 @@ display = Display()
class Connection(ConnectionBase):
""" Func-based connections """
"""Func-based connections"""
has_pipelining = False
@@ -65,7 +65,7 @@ class Connection(ConnectionBase):
return self
def exec_command(self, cmd, in_data=None, sudoable=True):
""" run a command on the remote minion """
"""run a command on the remote minion"""
if in_data:
raise AnsibleError("Internal Error: this module does not support optimized module pipelining")
@@ -83,16 +83,16 @@ class Connection(ConnectionBase):
return os.path.join(prefix, normpath[1:])
def put_file(self, in_path, out_path):
""" transfer a file from local to remote """
"""transfer a file from local to remote"""
out_path = self._normalize_path(out_path, '/')
out_path = self._normalize_path(out_path, "/")
display.vvv(f"PUT {in_path} TO {out_path}", host=self.host)
self.client.local.copyfile.send(in_path, out_path)
def fetch_file(self, in_path, out_path):
""" fetch a file from remote to local """
"""fetch a file from remote to local"""
in_path = self._normalize_path(in_path, '/')
in_path = self._normalize_path(in_path, "/")
display.vvv(f"FETCH {in_path} TO {out_path}", host=self.host)
# need to use a tmp dir due to the difference in semantics between getfile
# (which takes a directory as destination) and fetch_file, which
@@ -103,5 +103,5 @@ class Connection(ConnectionBase):
shutil.rmtree(tmpdir)
def close(self):
""" terminate the connection; nothing to do here """
"""terminate the connection; nothing to do here"""
pass


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Based on lxd.py (c) 2016, Matt Clay <matt@mystile.com>
# (c) 2023, Stephane Graber <stgraber@stgraber.org>
# Copyright (c) 2023 Ansible Project
@@ -14,6 +13,9 @@ short_description: Run tasks in Incus instances using the Incus CLI
description:
- Run commands or put/fetch files to an existing Incus instance using Incus CLI.
version_added: "8.2.0"
notes:
- When using this collection for Windows virtual machines, set C(ansible_shell_type) to C(powershell) or C(cmd) as a variable to the host in
the inventory.
options:
remote_addr:
description:
@@ -76,78 +78,127 @@ options:
"""
import os
from subprocess import call, Popen, PIPE
import re
from subprocess import PIPE, Popen, call
from ansible.errors import AnsibleError, AnsibleConnectionFailure, AnsibleFileNotFound
from ansible.errors import AnsibleConnectionFailure, AnsibleError, AnsibleFileNotFound
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.text.converters import to_bytes, to_text
from ansible.plugins.connection import ConnectionBase
class Connection(ConnectionBase):
""" Incus based connections """
"""Incus based connections"""
transport = "incus"
has_pipelining = True
def __init__(self, play_context, new_stdin, *args, **kwargs):
super(Connection, self).__init__(play_context, new_stdin, *args, **kwargs)
super().__init__(play_context, new_stdin, *args, **kwargs)
self._incus_cmd = get_bin_path("incus")
if not self._incus_cmd:
raise AnsibleError("incus command not found in PATH")
if getattr(self._shell, "_IS_WINDOWS", False):
# Initializing regular expression patterns to match on a PowerShell or cmd command line.
self.powershell_regex_pattern = re.compile(
r'^"?(?P<executable>(?:[a-z]:\\)?[a-z0-9 ()\\.]*powershell(?:\.exe)?)"?\s+(?P<args>.*)(?P<command>-c(?:ommand)?)\s+(?P<post_args>.*(\n.*)*)',
re.IGNORECASE,
)
self.cmd_regex_pattern = re.compile(
r'^"?(?P<executable>(?:[a-z]:\\)?[a-z0-9 ()\\.]*cmd(?:\.exe)?)"?\s+(?P<args>.*)(?P<command>/c)\s+(?P<post_args>.*)',
re.IGNORECASE,
)
# Basic setup for a Windows host.
self.has_native_async = True
self.always_pipeline_modules = True
self.module_implementation_preferences = (".ps1", ".exe", "")
self.allow_executable = False
def _connect(self):
"""connect to Incus (nothing to do here) """
super(Connection, self)._connect()
"""connect to Incus (nothing to do here)"""
super()._connect()
if not self._connected:
self._display.vvv(f"ESTABLISH Incus CONNECTION FOR USER: {self.get_option('remote_user')}",
host=self._instance())
self._display.vvv(
f"ESTABLISH Incus CONNECTION FOR USER: {self.get_option('remote_user')}", host=self._instance()
)
self._connected = True
def _build_command(self, cmd) -> str:
def _build_command(self, cmd) -> list[str]:
"""build the command to execute on the incus host"""
exec_cmd = [
# Force pseudo-terminal allocation if the active become plugin
# requires one (e.g. community.general.machinectl), otherwise the
# become helper runs without a controlling tty and silently fails.
require_tty = self.become is not None and getattr(self.become, "require_tty", False)
exec_cmd: list[str] = [
self._incus_cmd,
"--project", self.get_option("project"),
"--project",
self.get_option("project"),
"exec",
*(["-T"] if getattr(self._shell, "_IS_WINDOWS", False) else []),
*(["-t"] if require_tty and not getattr(self._shell, "_IS_WINDOWS", False) else []),
f"{self.get_option('remote')}:{self._instance()}",
"--"]
"--",
]
if self.get_option("remote_user") != "root":
self._display.vvv(
f"INFO: Running as non-root user: {self.get_option('remote_user')}, \
trying to run 'incus exec' with become method: {self.get_option('incus_become_method')}",
host=self._instance(),
)
exec_cmd.extend(
[self.get_option("incus_become_method"), self.get_option("remote_user"), "-c"]
)
if getattr(self._shell, "_IS_WINDOWS", False):
if regex_match := self.powershell_regex_pattern.match(cmd):
regex_pattern = self.powershell_regex_pattern
elif regex_match := self.cmd_regex_pattern.match(cmd):
regex_pattern = self.cmd_regex_pattern
exec_cmd.extend([self.get_option("executable"), "-c", cmd])
if regex_match:
self._display.vvvvvv(
f'Found keyword: "{regex_match.group("command")}" based on regex: {regex_pattern.pattern}',
host=self._instance(),
)
# To avoid splitting on a space contained in the path, set the executable as the first argument.
exec_cmd.append(regex_match.group("executable"))
if args := regex_match.group("args"):
exec_cmd.extend(args.strip().split(" "))
# Set the command argument depending on cmd or powershell and the rest of it
exec_cmd.append(regex_match.group("command"))
if post_args := regex_match.group("post_args"):
exec_cmd.append(post_args.strip())
else:
# For anything else using -EncodedCommand or else, just split on space.
exec_cmd.extend(cmd.split(" "))
else:
if self.get_option("remote_user") != "root":
self._display.vvv(
f"INFO: Running as non-root user: {self.get_option('remote_user')}, \
trying to run 'incus exec' with become method: {self.get_option('incus_become_method')}",
host=self._instance(),
)
exec_cmd.extend([self.get_option("incus_become_method"), self.get_option("remote_user"), "-c"])
exec_cmd.extend([self.get_option("executable"), "-c", cmd])
return exec_cmd
def _instance(self):
# Return only the leading part of the FQDN as the instance name
# as Incus instance names cannot be a FQDN.
return self.get_option('remote_addr').split(".")[0]
return self.get_option("remote_addr").split(".")[0]
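The FQDN-shortening rule used by `_instance()` above can be sketched standalone. `instance_name` below is a hypothetical free-function version, not part of the plugin:

```python
# Minimal sketch of _instance(): Incus instance names cannot be FQDNs,
# so only the leading DNS label of remote_addr is kept.
def instance_name(remote_addr):
    """Return the short host name (first DNS label) of remote_addr."""
    return remote_addr.split(".")[0]

print(instance_name("web01.example.com"))  # -> web01
print(instance_name("web01"))              # -> web01 (unchanged)
```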
def exec_command(self, cmd, in_data=None, sudoable=True):
""" execute a command on the Incus host """
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
"""execute a command on the Incus host"""
super().exec_command(cmd, in_data=in_data, sudoable=sudoable)
self._display.vvv(f"EXEC {cmd}",
host=self._instance())
self._display.vvv(f"EXEC {cmd}", host=self._instance())
local_cmd = self._build_command(cmd)
self._display.vvvvv(f"EXEC {local_cmd}", host=self._instance())
local_cmd = [to_bytes(i, errors='surrogate_or_strict') for i in local_cmd]
in_data = to_bytes(in_data, errors='surrogate_or_strict', nonstring='passthru')
local_cmd = [to_bytes(i, errors="surrogate_or_strict") for i in local_cmd]
in_data = to_bytes(in_data, errors="surrogate_or_strict", nonstring="passthru")
process = Popen(local_cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE)
stdout, stderr = process.communicate(in_data)
@@ -155,32 +206,22 @@ class Connection(ConnectionBase):
stdout = to_text(stdout)
stderr = to_text(stderr)
if stderr.startswith("Error: ") and stderr.rstrip().endswith(
": Instance is not running"
):
if stderr.startswith("Error: ") and stderr.rstrip().endswith(": Instance is not running"):
raise AnsibleConnectionFailure(
f"instance not running: {self._instance()} (remote={self.get_option('remote')}, project={self.get_option('project')})"
)
if stderr.startswith("Error: ") and stderr.rstrip().endswith(
": Instance not found"
):
if stderr.startswith("Error: ") and stderr.rstrip().endswith(": Instance not found"):
raise AnsibleConnectionFailure(
f"instance not found: {self._instance()} (remote={self.get_option('remote')}, project={self.get_option('project')})"
)
if (
stderr.startswith("Error: ")
and ": User does not have permission " in stderr
):
if stderr.startswith("Error: ") and ": User does not have permission " in stderr:
raise AnsibleConnectionFailure(
f"instance access denied: {self._instance()} (remote={self.get_option('remote')}, project={self.get_option('project')})"
)
if (
stderr.startswith("Error: ")
and ": User does not have entitlement " in stderr
):
if stderr.startswith("Error: ") and ": User does not have entitlement " in stderr:
raise AnsibleConnectionFailure(
f"instance access denied: {self._instance()} (remote={self.get_option('remote')}, project={self.get_option('project')})"
)
@@ -192,31 +233,26 @@ class Connection(ConnectionBase):
rc, uid_out, err = self.exec_command("/bin/id -u")
if rc != 0:
raise AnsibleError(
f"Failed to get remote uid for user {self.get_option('remote_user')}: {err}"
)
raise AnsibleError(f"Failed to get remote uid for user {self.get_option('remote_user')}: {err}")
uid = uid_out.strip()
rc, gid_out, err = self.exec_command("/bin/id -g")
if rc != 0:
raise AnsibleError(
f"Failed to get remote gid for user {self.get_option('remote_user')}: {err}"
)
raise AnsibleError(f"Failed to get remote gid for user {self.get_option('remote_user')}: {err}")
gid = gid_out.strip()
return int(uid), int(gid)
def put_file(self, in_path, out_path):
""" put a file from local to Incus """
super(Connection, self).put_file(in_path, out_path)
"""put a file from local to Incus"""
super().put_file(in_path, out_path)
self._display.vvv(f"PUT {in_path} TO {out_path}",
host=self._instance())
self._display.vvv(f"PUT {in_path} TO {out_path}", host=self._instance())
if not os.path.isfile(to_bytes(in_path, errors='surrogate_or_strict')):
if not os.path.isfile(to_bytes(in_path, errors="surrogate_or_strict")):
raise AnsibleFileNotFound(f"input path is not a file: {in_path}")
if self.get_option("remote_user") != "root":
if not getattr(self._shell, "_IS_WINDOWS", False) and self.get_option("remote_user") != "root":
uid, gid = self._get_remote_uid_gid()
local_cmd = [
self._incus_cmd,
@@ -246,30 +282,33 @@ class Connection(ConnectionBase):
self._display.vvvvv(f"PUT {local_cmd}", host=self._instance())
local_cmd = [to_bytes(i, errors='surrogate_or_strict') for i in local_cmd]
local_cmd = [to_bytes(i, errors="surrogate_or_strict") for i in local_cmd]
call(local_cmd)
def fetch_file(self, in_path, out_path):
""" fetch a file from Incus to local """
super(Connection, self).fetch_file(in_path, out_path)
"""fetch a file from Incus to local"""
super().fetch_file(in_path, out_path)
self._display.vvv(f"FETCH {in_path} TO {out_path}",
host=self._instance())
self._display.vvv(f"FETCH {in_path} TO {out_path}", host=self._instance())
local_cmd = [
self._incus_cmd,
"--project", self.get_option("project"),
"file", "pull", "--quiet",
"--project",
self.get_option("project"),
"file",
"pull",
"--quiet",
f"{self.get_option('remote')}:{self._instance()}/{in_path}",
out_path]
out_path,
]
local_cmd = [to_bytes(i, errors='surrogate_or_strict') for i in local_cmd]
local_cmd = [to_bytes(i, errors="surrogate_or_strict") for i in local_cmd]
call(local_cmd)
def close(self):
""" close the connection (nothing to do here) """
super(Connection, self).close()
"""close the connection (nothing to do here)"""
super().close()
self._connected = False


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Based on jail.py
# (c) 2013, Michael Scherer <misc@zarb.org>
# (c) 2015, Toshio Kuratomi <tkuratomi@ansible.com>
@@ -34,40 +33,43 @@ options:
import subprocess
from ansible_collections.community.general.plugins.connection.jail import Connection as Jail
from ansible.module_utils.common.text.converters import to_native
from ansible.errors import AnsibleError
from ansible.module_utils.common.text.converters import to_native
from ansible.utils.display import Display
from ansible_collections.community.general.plugins.connection.jail import Connection as Jail
display = Display()
class Connection(Jail):
""" Local iocage based connections """
"""Local iocage based connections"""
transport = 'community.general.iocage'
transport = "community.general.iocage"
def __init__(self, play_context, new_stdin, *args, **kwargs):
self.ioc_jail = play_context.remote_addr
self.iocage_cmd = Jail._search_executable('iocage')
self.iocage_cmd = Jail._search_executable("iocage")
jail_uuid = self.get_jail_uuid()
kwargs[Jail.modified_jailname_key] = f'ioc-{jail_uuid}'
kwargs[Jail.modified_jailname_key] = f"ioc-{jail_uuid}"
display.vvv(
f"Jail {self.ioc_jail} has been translated to {kwargs[Jail.modified_jailname_key]}",
host=kwargs[Jail.modified_jailname_key]
host=kwargs[Jail.modified_jailname_key],
)
super(Connection, self).__init__(play_context, new_stdin, *args, **kwargs)
super().__init__(play_context, new_stdin, *args, **kwargs)
def get_jail_uuid(self):
p = subprocess.Popen([self.iocage_cmd, 'get', 'host_hostuuid', self.ioc_jail],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT)
p = subprocess.Popen(
[self.iocage_cmd, "get", "host_hostuuid", self.ioc_jail],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.STDOUT,
)
stdout, stderr = p.communicate()
@@ -83,4 +85,4 @@ class Connection(Jail):
if p.returncode != 0:
raise AnsibleError(f"iocage returned an error: {stdout}")
return stdout.strip('\n')
return stdout.strip("\n")
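The `get_jail_uuid()` pattern above (run a CLI, capture combined stdout/stderr, fail loudly on a nonzero exit) can be sketched in isolation. Here `echo` stands in for the `iocage` binary so the sketch runs anywhere:

```python
import subprocess

# Sketch of the get_jail_uuid() pattern: capture stdout and stderr on one
# stream (stderr=STDOUT), then strip the trailing newline from the result.
def run_and_strip(cmd):
    p = subprocess.Popen(
        cmd,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.STDOUT,
    )
    stdout, _ = p.communicate()
    if p.returncode != 0:
        raise RuntimeError(f"command returned an error: {stdout!r}")
    return stdout.decode().strip("\n")

print(run_and_strip(["echo", "hello"]))  # -> hello
```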


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Based on local.py by Michael DeHaan <michael.dehaan@gmail.com>
# and chroot.py by Maykel Moya <mmoya@speedyrails.com>
# Copyright (c) 2013, Michael Scherer <misc@zarb.org>
@@ -43,25 +42,25 @@ from shlex import quote as shlex_quote
from ansible.errors import AnsibleError
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.text.converters import to_bytes, to_native, to_text
from ansible.plugins.connection import ConnectionBase, BUFSIZE
from ansible.plugins.connection import BUFSIZE, ConnectionBase
from ansible.utils.display import Display
display = Display()
class Connection(ConnectionBase):
""" Local BSD Jail based connections """
"""Local BSD Jail based connections"""
modified_jailname_key = 'conn_jail_name'
modified_jailname_key = "conn_jail_name"
transport = 'community.general.jail'
transport = "community.general.jail"
# Pipelining may work. Someone needs to test by setting this to True and
# having pipelining=True in their ansible.cfg
has_pipelining = True
has_tty = False
def __init__(self, play_context, new_stdin, *args, **kwargs):
super(Connection, self).__init__(play_context, new_stdin, *args, **kwargs)
super().__init__(play_context, new_stdin, *args, **kwargs)
self.jail = self._play_context.remote_addr
if self.modified_jailname_key in kwargs:
@@ -70,8 +69,8 @@ class Connection(ConnectionBase):
if os.geteuid() != 0:
raise AnsibleError("jail connection requires running as root")
self.jls_cmd = self._search_executable('jls')
self.jexec_cmd = self._search_executable('jexec')
self.jls_cmd = self._search_executable("jls")
self.jexec_cmd = self._search_executable("jexec")
if self.jail not in self.list_jails():
raise AnsibleError(f"incorrect jail name {self.jail}")
@@ -80,27 +79,27 @@ class Connection(ConnectionBase):
def _search_executable(executable):
try:
return get_bin_path(executable)
except ValueError:
raise AnsibleError(f"{executable} command not found in PATH")
except ValueError as e:
raise AnsibleError(f"{executable} command not found in PATH") from e
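The switch to `raise ... from e` seen above recurs throughout this diff; explicit chaining preserves the original exception as `__cause__` instead of discarding it. A minimal sketch, where `search_executable` is a hypothetical stand-in for `get_bin_path()` that always fails:

```python
# Sketch of the "raise ... from e" pattern adopted in this diff.
def search_executable(executable):
    # Hypothetical stand-in for get_bin_path(); always fails here.
    raise ValueError(f"{executable} not found")

def find_or_error(executable):
    try:
        return search_executable(executable)
    except ValueError as e:
        # Chaining keeps the original error reachable via __cause__
        # and visible in the traceback.
        raise RuntimeError(f"{executable} command not found in PATH") from e

try:
    find_or_error("jls")
except RuntimeError as err:
    assert isinstance(err.__cause__, ValueError)  # original error preserved
```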
def list_jails(self):
p = subprocess.Popen([self.jls_cmd, '-q', 'name'],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p = subprocess.Popen(
[self.jls_cmd, "-q", "name"], stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE
)
stdout, stderr = p.communicate()
return to_text(stdout, errors='surrogate_or_strict').split()
return to_text(stdout, errors="surrogate_or_strict").split()
def _connect(self):
""" connect to the jail; nothing to do here """
super(Connection, self)._connect()
"""connect to the jail; nothing to do here"""
super()._connect()
if not self._connected:
display.vvv(f"ESTABLISH JAIL CONNECTION FOR USER: {self._play_context.remote_user}", host=self.jail)
self._connected = True
def _buffered_exec_command(self, cmd, stdin=subprocess.PIPE):
""" run a command on the jail. This is only needed for implementing
"""run a command on the jail. This is only needed for implementing
put_file() get_file() so that we don't have to read the whole file
into memory.
@@ -109,25 +108,24 @@ class Connection(ConnectionBase):
"""
local_cmd = [self.jexec_cmd]
set_env = ''
set_env = ""
if self._play_context.remote_user is not None:
local_cmd += ['-U', self._play_context.remote_user]
local_cmd += ["-U", self._play_context.remote_user]
# update HOME since -U does not update the jail environment
set_env = f"HOME=~{self._play_context.remote_user} "
local_cmd += [self.jail, self._play_context.executable, '-c', set_env + cmd]
local_cmd += [self.jail, self._play_context.executable, "-c", set_env + cmd]
display.vvv(f"EXEC {local_cmd}", host=self.jail)
local_cmd = [to_bytes(i, errors='surrogate_or_strict') for i in local_cmd]
p = subprocess.Popen(local_cmd, shell=False, stdin=stdin,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
local_cmd = [to_bytes(i, errors="surrogate_or_strict") for i in local_cmd]
p = subprocess.Popen(local_cmd, shell=False, stdin=stdin, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
return p
def exec_command(self, cmd, in_data=None, sudoable=False):
""" run a command on the jail """
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
"""run a command on the jail"""
super().exec_command(cmd, in_data=in_data, sudoable=sudoable)
p = self._buffered_exec_command(cmd)
@@ -136,70 +134,74 @@ class Connection(ConnectionBase):
@staticmethod
def _prefix_login_path(remote_path):
""" Make sure that we put files into a standard path
"""Make sure that we put files into a standard path
If a path is relative, then we need to choose where to put it.
ssh chooses $HOME but we aren't guaranteed that a home dir will
exist in any given chroot. So for now we're choosing "/" instead.
This also happens to be the former default.
If a path is relative, then we need to choose where to put it.
ssh chooses $HOME but we aren't guaranteed that a home dir will
exist in any given chroot. So for now we're choosing "/" instead.
This also happens to be the former default.
Can revisit using $HOME instead if it is a problem
Can revisit using $HOME instead if it is a problem
"""
if not remote_path.startswith(os.path.sep):
remote_path = os.path.join(os.path.sep, remote_path)
return os.path.normpath(remote_path)
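The relative-path rule described in the docstring above can be exercised standalone; this is a minimal free-function version of `_prefix_login_path`:

```python
import os

# Relative remote paths are anchored at "/" because a chroot/jail is not
# guaranteed to have a home directory; absolute paths pass through, and
# normpath collapses any ".." segments.
def prefix_login_path(remote_path):
    if not remote_path.startswith(os.path.sep):
        remote_path = os.path.join(os.path.sep, remote_path)
    return os.path.normpath(remote_path)

print(prefix_login_path("etc/hosts"))    # -> /etc/hosts
print(prefix_login_path("/tmp/../var"))  # -> /var
```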
def put_file(self, in_path, out_path):
""" transfer a file from local to jail """
super(Connection, self).put_file(in_path, out_path)
"""transfer a file from local to jail"""
super().put_file(in_path, out_path)
display.vvv(f"PUT {in_path} TO {out_path}", host=self.jail)
out_path = shlex_quote(self._prefix_login_path(out_path))
try:
with open(to_bytes(in_path, errors='surrogate_or_strict'), 'rb') as in_file:
with open(to_bytes(in_path, errors="surrogate_or_strict"), "rb") as in_file:
if not os.fstat(in_file.fileno()).st_size:
count = ' count=0'
count = " count=0"
else:
count = ''
count = ""
try:
p = self._buffered_exec_command(f'dd of={out_path} bs={BUFSIZE}{count}', stdin=in_file)
except OSError:
raise AnsibleError("jail connection requires dd command in the jail")
p = self._buffered_exec_command(f"dd of={out_path} bs={BUFSIZE}{count}", stdin=in_file)
except OSError as e:
raise AnsibleError("jail connection requires dd command in the jail") from e
try:
stdout, stderr = p.communicate()
except Exception:
except Exception as e:
traceback.print_exc()
raise AnsibleError(f"failed to transfer file {in_path} to {out_path}")
raise AnsibleError(f"failed to transfer file {in_path} to {out_path}") from e
if p.returncode != 0:
raise AnsibleError(f"failed to transfer file {in_path} to {out_path}:\n{to_native(stdout)}\n{to_native(stderr)}")
except IOError:
raise AnsibleError(f"file or module does not exist at: {in_path}")
raise AnsibleError(
f"failed to transfer file {in_path} to {out_path}:\n{to_native(stdout)}\n{to_native(stderr)}"
)
except OSError as e:
raise AnsibleError(f"file or module does not exist at: {in_path}") from e
def fetch_file(self, in_path, out_path):
""" fetch a file from jail to local """
super(Connection, self).fetch_file(in_path, out_path)
"""fetch a file from jail to local"""
super().fetch_file(in_path, out_path)
display.vvv(f"FETCH {in_path} TO {out_path}", host=self.jail)
in_path = shlex_quote(self._prefix_login_path(in_path))
try:
p = self._buffered_exec_command(f'dd if={in_path} bs={BUFSIZE}')
except OSError:
raise AnsibleError("jail connection requires dd command in the jail")
p = self._buffered_exec_command(f"dd if={in_path} bs={BUFSIZE}")
except OSError as e:
raise AnsibleError("jail connection requires dd command in the jail") from e
with open(to_bytes(out_path, errors='surrogate_or_strict'), 'wb+') as out_file:
with open(to_bytes(out_path, errors="surrogate_or_strict"), "wb+") as out_file:
try:
chunk = p.stdout.read(BUFSIZE)
while chunk:
out_file.write(chunk)
chunk = p.stdout.read(BUFSIZE)
except Exception:
except Exception as e:
traceback.print_exc()
raise AnsibleError(f"failed to transfer file {in_path} to {out_path}")
raise AnsibleError(f"failed to transfer file {in_path} to {out_path}") from e
stdout, stderr = p.communicate()
if p.returncode != 0:
raise AnsibleError(f"failed to transfer file {in_path} to {out_path}:\n{to_native(stdout)}\n{to_native(stderr)}")
raise AnsibleError(
f"failed to transfer file {in_path} to {out_path}:\n{to_native(stdout)}\n{to_native(stderr)}"
)
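The jail plugin's file transfer streams data through `dd` so whole files never have to sit in memory. A hedged sketch of the `put_file()` side, with the `jexec` wrapping omitted so plain local `dd` is used and the pipeline runs anywhere `dd` exists:

```python
import subprocess

# Sketch of the dd-based transfer: the local file becomes stdin of
# "dd of=<out_path>", mirroring how put_file() pipes into the jail.
def dd_put(in_path, out_path, bufsize=65536):
    with open(in_path, "rb") as in_file:
        p = subprocess.Popen(
            ["dd", f"of={out_path}", f"bs={bufsize}"],
            stdin=in_file,
            stdout=subprocess.PIPE,
            stderr=subprocess.PIPE,
        )
        p.communicate()
    if p.returncode != 0:
        raise RuntimeError(f"failed to transfer {in_path} to {out_path}")
```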
def close(self):
""" terminate the connection; nothing to do here """
super(Connection, self).close()
"""terminate the connection; nothing to do here"""
super().close()
self._connected = False


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# (c) 2015, Joerg Thalheim <joerg@higgsboson.tk>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -32,16 +31,17 @@ options:
- name: ansible_lxc_executable
"""
import errno
import fcntl
import os
import select
import shutil
import traceback
import select
import fcntl
import errno
HAS_LIBLXC = False
try:
import lxc as _lxc
HAS_LIBLXC = True
except ImportError:
pass
@@ -52,27 +52,27 @@ from ansible.plugins.connection import ConnectionBase
class Connection(ConnectionBase):
""" Local lxc based connections """
"""Local lxc based connections"""
transport = 'community.general.lxc'
transport = "community.general.lxc"
has_pipelining = True
default_user = 'root'
default_user = "root"
def __init__(self, play_context, new_stdin, *args, **kwargs):
super(Connection, self).__init__(play_context, new_stdin, *args, **kwargs)
super().__init__(play_context, new_stdin, *args, **kwargs)
self.container_name = None
self.container = None
def _connect(self):
""" connect to the lxc; nothing to do here """
super(Connection, self)._connect()
"""connect to the lxc; nothing to do here"""
super()._connect()
if not HAS_LIBLXC:
msg = "lxc python bindings are not installed"
raise errors.AnsibleError(msg)
container_name = self.get_option('remote_addr')
container_name = self.get_option("remote_addr")
if self.container and self.container_name == container_name:
return
@@ -94,12 +94,12 @@ class Connection(ConnectionBase):
while len(read_fds) > 0 or len(write_fds) > 0:
try:
ready_reads, ready_writes, dummy = select.select(read_fds, write_fds, [])
except select.error as e:
except OSError as e:
if e.args[0] == errno.EINTR:
continue
raise
for fd in ready_writes:
in_data = in_data[os.write(fd, in_data):]
in_data = in_data[os.write(fd, in_data) :]
if len(in_data) == 0:
write_fds.remove(fd)
for fd in ready_reads:
@@ -118,12 +118,12 @@ class Connection(ConnectionBase):
return fd
def exec_command(self, cmd, in_data=None, sudoable=False):
""" run a command on the chroot """
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
"""run a command on the chroot"""
super().exec_command(cmd, in_data=in_data, sudoable=sudoable)
# python2-lxc needs bytes. python3-lxc needs text.
executable = to_native(self.get_option('executable'), errors='surrogate_or_strict')
local_cmd = [executable, '-c', to_native(cmd, errors='surrogate_or_strict')]
executable = to_native(self.get_option("executable"), errors="surrogate_or_strict")
local_cmd = [executable, "-c", to_native(cmd, errors="surrogate_or_strict")]
read_stdout, write_stdout = None, None
read_stderr, write_stderr = None, None
@@ -134,14 +134,14 @@ class Connection(ConnectionBase):
read_stderr, write_stderr = os.pipe()
kwargs = {
'stdout': self._set_nonblocking(write_stdout),
'stderr': self._set_nonblocking(write_stderr),
'env_policy': _lxc.LXC_ATTACH_CLEAR_ENV
"stdout": self._set_nonblocking(write_stdout),
"stderr": self._set_nonblocking(write_stderr),
"env_policy": _lxc.LXC_ATTACH_CLEAR_ENV,
}
if in_data:
read_stdin, write_stdin = os.pipe()
kwargs['stdin'] = self._set_nonblocking(read_stdin)
kwargs["stdin"] = self._set_nonblocking(read_stdin)
self._display.vvv(f"EXEC {local_cmd}", host=self.container_name)
pid = self.container.attach(_lxc.attach_run_command, local_cmd, **kwargs)
@@ -154,82 +154,77 @@ class Connection(ConnectionBase):
if read_stdin:
read_stdin = os.close(read_stdin)
return self._communicate(pid,
in_data,
write_stdin,
read_stdout,
read_stderr)
return self._communicate(pid, in_data, write_stdin, read_stdout, read_stderr)
finally:
fds = [read_stdout,
write_stdout,
read_stderr,
write_stderr,
read_stdin,
write_stdin]
fds = [read_stdout, write_stdout, read_stderr, write_stderr, read_stdin, write_stdin]
for fd in fds:
if fd:
os.close(fd)
def put_file(self, in_path, out_path):
''' transfer a file from local to lxc '''
super(Connection, self).put_file(in_path, out_path)
"""transfer a file from local to lxc"""
super().put_file(in_path, out_path)
self._display.vvv(f"PUT {in_path} TO {out_path}", host=self.container_name)
in_path = to_bytes(in_path, errors='surrogate_or_strict')
out_path = to_bytes(out_path, errors='surrogate_or_strict')
in_path = to_bytes(in_path, errors="surrogate_or_strict")
out_path = to_bytes(out_path, errors="surrogate_or_strict")
if not os.path.exists(in_path):
msg = f"file or module does not exist: {in_path}"
raise errors.AnsibleFileNotFound(msg)
try:
src_file = open(in_path, "rb")
except IOError:
except OSError as e:
traceback.print_exc()
raise errors.AnsibleError(f"failed to open input file to {in_path}")
raise errors.AnsibleError(f"failed to open input file to {in_path}") from e
try:
def write_file(args):
with open(out_path, 'wb+') as dst_file:
with open(out_path, "wb+") as dst_file:
shutil.copyfileobj(src_file, dst_file)
try:
self.container.attach_wait(write_file, None)
except IOError:
except OSError as e:
traceback.print_exc()
msg = f"failed to transfer file to {out_path}"
raise errors.AnsibleError(msg)
raise errors.AnsibleError(msg) from e
finally:
src_file.close()
def fetch_file(self, in_path, out_path):
''' fetch a file from lxc to local '''
super(Connection, self).fetch_file(in_path, out_path)
"""fetch a file from lxc to local"""
super().fetch_file(in_path, out_path)
self._display.vvv(f"FETCH {in_path} TO {out_path}", host=self.container_name)
in_path = to_bytes(in_path, errors='surrogate_or_strict')
out_path = to_bytes(out_path, errors='surrogate_or_strict')
in_path = to_bytes(in_path, errors="surrogate_or_strict")
out_path = to_bytes(out_path, errors="surrogate_or_strict")
try:
dst_file = open(out_path, "wb")
except IOError:
except OSError as e:
traceback.print_exc()
msg = f"failed to open output file {out_path}"
raise errors.AnsibleError(msg)
raise errors.AnsibleError(msg) from e
try:
def write_file(args):
try:
with open(in_path, 'rb') as src_file:
with open(in_path, "rb") as src_file:
shutil.copyfileobj(src_file, dst_file)
finally:
# this is needed in the lxc child process
# to flush internal python buffers
dst_file.close()
try:
self.container.attach_wait(write_file, None)
except IOError:
except OSError as e:
traceback.print_exc()
msg = f"failed to transfer file from {in_path} to {out_path}"
raise errors.AnsibleError(msg)
raise errors.AnsibleError(msg) from e
finally:
dst_file.close()
def close(self):
''' terminate the connection; nothing to do here '''
super(Connection, self).close()
"""terminate the connection; nothing to do here"""
super().close()
self._connected = False


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2016 Matt Clay <matt@mystile.com>
# Copyright (c) 2017 Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
@@ -75,44 +74,44 @@ options:
"""
import os
from subprocess import Popen, PIPE
from subprocess import PIPE, Popen
from ansible.errors import AnsibleError, AnsibleConnectionFailure, AnsibleFileNotFound
from ansible.errors import AnsibleConnectionFailure, AnsibleError, AnsibleFileNotFound
from ansible.module_utils.common.process import get_bin_path
from ansible.module_utils.common.text.converters import to_bytes, to_text
from ansible.plugins.connection import ConnectionBase
class Connection(ConnectionBase):
""" lxd based connections """
"""lxd based connections"""
transport = 'community.general.lxd'
transport = "community.general.lxd"
has_pipelining = True
def __init__(self, play_context, new_stdin, *args, **kwargs):
super(Connection, self).__init__(play_context, new_stdin, *args, **kwargs)
super().__init__(play_context, new_stdin, *args, **kwargs)
try:
self._lxc_cmd = get_bin_path("lxc")
except ValueError:
raise AnsibleError("lxc command not found in PATH")
except ValueError as e:
raise AnsibleError("lxc command not found in PATH") from e
def _host(self):
""" translate remote_addr to lxd (short) hostname """
"""translate remote_addr to lxd (short) hostname"""
return self.get_option("remote_addr").split(".", 1)[0]
def _connect(self):
"""connect to lxd (nothing to do here) """
super(Connection, self)._connect()
"""connect to lxd (nothing to do here)"""
super()._connect()
if not self._connected:
self._display.vvv(f"ESTABLISH LXD CONNECTION FOR USER: {self.get_option('remote_user')}", host=self._host())
self._connected = True
def _build_command(self, cmd) -> str:
def _build_command(self, cmd) -> list[str]:
"""build the command to execute on the lxd host"""
exec_cmd = [self._lxc_cmd]
exec_cmd: list[str] = [self._lxc_cmd]
if self.get_option("project"):
exec_cmd.extend(["--project", self.get_option("project")])
@@ -125,25 +124,23 @@ class Connection(ConnectionBase):
trying to run 'lxc exec' with become method: {self.get_option('lxd_become_method')}",
host=self._host(),
)
exec_cmd.extend(
[self.get_option("lxd_become_method"), self.get_option("remote_user"), "-c"]
)
exec_cmd.extend([self.get_option("lxd_become_method"), self.get_option("remote_user"), "-c"])
exec_cmd.extend([self.get_option("executable"), "-c", cmd])
return exec_cmd
def exec_command(self, cmd, in_data=None, sudoable=True):
""" execute a command on the lxd host """
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
"""execute a command on the lxd host"""
super().exec_command(cmd, in_data=in_data, sudoable=sudoable)
self._display.vvv(f"EXEC {cmd}", host=self._host())
local_cmd = self._build_command(cmd)
self._display.vvvvv(f"EXEC {local_cmd}", host=self._host())
local_cmd = [to_bytes(i, errors='surrogate_or_strict') for i in local_cmd]
in_data = to_bytes(in_data, errors='surrogate_or_strict', nonstring='passthru')
local_cmd = [to_bytes(i, errors="surrogate_or_strict") for i in local_cmd]
in_data = to_bytes(in_data, errors="surrogate_or_strict", nonstring="passthru")
process = Popen(local_cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE)
stdout, stderr = process.communicate(in_data)
@@ -166,27 +163,23 @@ class Connection(ConnectionBase):
rc, uid_out, err = self.exec_command("/bin/id -u")
if rc != 0:
raise AnsibleError(
f"Failed to get remote uid for user {self.get_option('remote_user')}: {err}"
)
raise AnsibleError(f"Failed to get remote uid for user {self.get_option('remote_user')}: {err}")
uid = uid_out.strip()
rc, gid_out, err = self.exec_command("/bin/id -g")
if rc != 0:
raise AnsibleError(
f"Failed to get remote gid for user {self.get_option('remote_user')}: {err}"
)
raise AnsibleError(f"Failed to get remote gid for user {self.get_option('remote_user')}: {err}")
gid = gid_out.strip()
return int(uid), int(gid)
def put_file(self, in_path, out_path):
""" put a file from local to lxd """
super(Connection, self).put_file(in_path, out_path)
"""put a file from local to lxd"""
super().put_file(in_path, out_path)
self._display.vvv(f"PUT {in_path} TO {out_path}", host=self._host())
if not os.path.isfile(to_bytes(in_path, errors='surrogate_or_strict')):
if not os.path.isfile(to_bytes(in_path, errors="surrogate_or_strict")):
raise AnsibleFileNotFound(f"input path is not a file: {in_path}")
local_cmd = [self._lxc_cmd]
@@ -219,33 +212,29 @@ class Connection(ConnectionBase):
self._display.vvvvv(f"PUT {local_cmd}", host=self._host())
local_cmd = [to_bytes(i, errors='surrogate_or_strict') for i in local_cmd]
local_cmd = [to_bytes(i, errors="surrogate_or_strict") for i in local_cmd]
process = Popen(local_cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE)
process.communicate()
def fetch_file(self, in_path, out_path):
""" fetch a file from lxd to local """
super(Connection, self).fetch_file(in_path, out_path)
"""fetch a file from lxd to local"""
super().fetch_file(in_path, out_path)
self._display.vvv(f"FETCH {in_path} TO {out_path}", host=self._host())
local_cmd = [self._lxc_cmd]
if self.get_option("project"):
local_cmd.extend(["--project", self.get_option("project")])
local_cmd.extend([
"file", "pull",
f"{self.get_option('remote')}:{self._host()}/{in_path}",
out_path
])
local_cmd.extend(["file", "pull", f"{self.get_option('remote')}:{self._host()}/{in_path}", out_path])
local_cmd = [to_bytes(i, errors='surrogate_or_strict') for i in local_cmd]
local_cmd = [to_bytes(i, errors="surrogate_or_strict") for i in local_cmd]
process = Popen(local_cmd, stdin=PIPE, stdout=PIPE, stderr=PIPE)
process.communicate()
def close(self):
""" close the connection (nothing to do here) """
super(Connection, self).close()
"""close the connection (nothing to do here)"""
super().close()
self._connected = False


@@ -1,4 +1,3 @@
# -*- coding: utf-8 -*-
# Based on the buildah connection plugin
# Copyright (c) 2017 Ansible Project
# 2018 Kushal Das
@@ -10,7 +9,6 @@
from __future__ import annotations
DOCUMENTATION = r"""
name: qubes
short_description: Interact with an existing QubesOS AppVM
@@ -41,9 +39,9 @@ options:
import subprocess
from ansible.errors import AnsibleConnectionFailure
from ansible.module_utils.common.text.converters import to_bytes
from ansible.plugins.connection import ConnectionBase, ensure_connect
from ansible.errors import AnsibleConnectionFailure
from ansible.utils.display import Display
display = Display()
@@ -54,11 +52,11 @@ class Connection(ConnectionBase):
"""This is a connection plugin for qubes: it uses qubes-run-vm binary to interact with the containers."""
# String used to identify this Connection class from other classes
transport = 'community.general.qubes'
transport = "community.general.qubes"
has_pipelining = True
def __init__(self, play_context, new_stdin, *args, **kwargs):
super(Connection, self).__init__(play_context, new_stdin, *args, **kwargs)
super().__init__(play_context, new_stdin, *args, **kwargs)
self._remote_vmname = self._play_context.remote_addr
self._connected = False
@@ -89,28 +87,29 @@ class Connection(ConnectionBase):
local_cmd.append(shell)
local_cmd = [to_bytes(i, errors='surrogate_or_strict') for i in local_cmd]
local_cmd = [to_bytes(i, errors="surrogate_or_strict") for i in local_cmd]
display.vvvv("Local cmd: ", local_cmd)
display.vvv(f"RUN {local_cmd}", host=self._remote_vmname)
p = subprocess.Popen(local_cmd, shell=False, stdin=subprocess.PIPE,
stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p = subprocess.Popen(
local_cmd, shell=False, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE
)
# Here we are writing the actual command to the remote bash
p.stdin.write(to_bytes(cmd, errors='surrogate_or_strict'))
p.stdin.write(to_bytes(cmd, errors="surrogate_or_strict"))
stdout, stderr = p.communicate(input=in_data)
return p.returncode, stdout, stderr
def _connect(self):
"""No persistent connection is being maintained."""
super(Connection, self)._connect()
super()._connect()
self._connected = True
@ensure_connect
@ensure_connect # type: ignore # TODO: for some reason, the type infos for ensure_connect suck...
def exec_command(self, cmd, in_data=None, sudoable=False):
"""Run specified command in a running QubesVM """
super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)
"""Run specified command in a running QubesVM"""
super().exec_command(cmd, in_data=in_data, sudoable=sudoable)
display.vvvv(f"CMD IS: {cmd}")
@@ -120,25 +119,25 @@ class Connection(ConnectionBase):
return rc, stdout, stderr
def put_file(self, in_path, out_path):
""" Place a local file located in 'in_path' inside VM at 'out_path' """
super(Connection, self).put_file(in_path, out_path)
"""Place a local file located in 'in_path' inside VM at 'out_path'"""
super().put_file(in_path, out_path)
display.vvv(f"PUT {in_path} TO {out_path}", host=self._remote_vmname)
with open(in_path, "rb") as fobj:
source_data = fobj.read()
retcode, dummy, dummy = self._qubes(f'cat > "{out_path}\"\n', source_data, "qubes.VMRootShell")
retcode, dummy, dummy = self._qubes(f'cat > "{out_path}"\n', source_data, "qubes.VMRootShell")
# if qubes.VMRootShell service not supported, fallback to qubes.VMShell and
# hope it will have appropriate permissions
if retcode == 127:
retcode, dummy, dummy = self._qubes(f'cat > "{out_path}\"\n', source_data)
retcode, dummy, dummy = self._qubes(f'cat > "{out_path}"\n', source_data)
if retcode != 0:
raise AnsibleConnectionFailure(f'Failed to put_file to {out_path}')
raise AnsibleConnectionFailure(f"Failed to put_file to {out_path}")
def fetch_file(self, in_path, out_path):
"""Obtain file specified via 'in_path' from the container and place it at 'out_path' """
super(Connection, self).fetch_file(in_path, out_path)
"""Obtain file specified via 'in_path' from the container and place it at 'out_path'"""
super().fetch_file(in_path, out_path)
display.vvv(f"FETCH {in_path} TO {out_path}", host=self._remote_vmname)
# We are running in dom0
@@ -147,9 +146,9 @@ class Connection(ConnectionBase):
p = subprocess.Popen(cmd_args_list, shell=False, stdout=fobj)
p.communicate()
if p.returncode != 0:
raise AnsibleConnectionFailure(f'Failed to fetch file to {out_path}')
raise AnsibleConnectionFailure(f"Failed to fetch file to {out_path}")
def close(self):
""" Closing the connection """
super(Connection, self).close()
"""Closing the connection"""
super().close()
self._connected = False
