Compare commits


33 Commits

Author SHA1 Message Date
Felix Fontein
f88b8c85d7 Release 12.4.0. 2026-02-23 17:50:05 +01:00
patchback[bot]
6385fbe038 [PR #11534/e118b23b backport][stable-12] Simplify and extend from_ini tests (#11535)
Simplify and extend from_ini tests (#11534)

Simplify and extend from_ini tests.

(cherry picked from commit e118b23ba0)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-02-23 06:30:35 +01:00
patchback[bot]
4b6cd41512 [PR #11462/ce7cb4e9 backport][stable-12] New module icinga2_downtime (#11532)
New module icinga2_downtime (#11462)

* feat: add Icinga 2 downtime module, allowing downtimes to be scheduled and removed through the Icinga 2 REST API.



* ensure compatibility with ModuleTestCase

feat: errors raised from MH now contain the changed flag
ref: move module exit out of the decorated run method



* revised module

ref: module refactored using StateModuleHelper now
ref: suggested changes by reviewer added



* revert change regarding changed flag in MH



* refactoring and set changed flag explicitly on error



* Remove the check for a state change on module failure.



* ref: test cases migrated to the new feature that allows passing through exceptions



* Update plugins/module_utils/icinga2.py



* Update plugins/module_utils/icinga2.py



* Update plugins/modules/icinga2_downtime.py



* ref: make module helper private



* fix: ensure that all non-null values are added to the request otherwise a `false` value is dropped



* ref: module description extended with the note that check mode is not supported



* Update plugins/modules/icinga2_downtime.py



* fix: documentation updated



* ref: documentation updated
ref: doc fragment added



* Update plugins/doc_fragments/icinga2_api.py



* ref: doc fragment renamed to `_icinga2_api.py`



* ref: add maintainer for the doc fragment to BOTMETA.yml



* Update plugins/modules/icinga2_downtime.py



* Update plugins/modules/icinga2_downtime.py



* Update plugins/modules/icinga2_downtime.py



---------





(cherry picked from commit ce7cb4e914)

Signed-off-by: Fiehe Christoph  <c.fiehe@eurodata.de>
Co-authored-by: Christoph Fiehe <cfiehe@users.noreply.github.com>
Co-authored-by: Fiehe Christoph <c.fiehe@eurodata.de>
Co-authored-by: Felix Fontein <felix@fontein.de>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-02-23 06:17:51 +01:00
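The non-null filtering fix mentioned above ("ensure that all non-null values are added to the request otherwise a `false` value is dropped") can be sketched as follows; the helper name and parameters are illustrative, not the module's actual API:

```python
def build_request_body(params):
    # Keep every key whose value is not None. A truthiness test
    # ("if v") would also drop False and 0, which are meaningful
    # API values (e.g. fixed=False for a flexible downtime).
    return {k: v for k, v in params.items() if v is not None}

body = build_request_body(
    {"author": "ops", "fixed": False, "duration": 0, "comment": None}
)
# "fixed" and "duration" survive; only "comment" (None) is dropped.
```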
patchback[bot]
8c429ac69d [PR #11485/cb91ff42 backport][stable-12] Fix: avoid deprecated callback. (#11531)
Fix: avoid deprecated callback. (#11485)

* Fix: avoid deprecated callback.

* addition of changelog

* Improve changelog fragment.

---------



(cherry picked from commit cb91ff424f)

Co-authored-by: Tom Uijldert <155556120+TomUijldert@users.noreply.github.com>
Co-authored-by: tom uijldert <tom.uijldert@gmail.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-02-23 06:17:28 +01:00
patchback[bot]
30eb35cb95 [PR #11512/aec0e61b backport][stable-12] adds parameter delimiters to from_ini filter (#11533)
adds parameter delimiters to from_ini filter (#11512)

* adds parameter delimiters to from_ini filter

fixes issue #11506

* adds changelog fragment

* fixes pylint dangerous-default-value / W0102

* does not assume default delimiters

let that be decided in the super class

* Update plugins/filter/from_ini.py

verbose description



* Update changelogs/fragments/11512-from_ini-delimiters.yaml



* adds input validation

* adds check for delimiters not None

* adds missing import

* removes the negation

* adds suggestions from russoz

* adds ruff format suggestion

---------


(cherry picked from commit aec0e61ba1)

Co-authored-by: Robert Sander <github@gurubert.de>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-02-23 06:17:00 +01:00
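The effect of the new `delimiters` parameter can be sketched with Python's `configparser`, which `from_ini` builds on; the sample document is illustrative:

```python
import configparser

RAW = "[db]\nuser=admin: yes\n"

# With the default delimiters ("=", ":") the line splits at the first
# "=", so the key is "user". Restricting delimiters to ":" keeps the
# "=" inside the key instead.
default = configparser.ConfigParser()
default.read_string(RAW)

colon_only = configparser.ConfigParser(delimiters=(":",))
colon_only.read_string(RAW)
```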
Felix Fontein
33f3e7172b Prepare 12.4.0. 2026-02-22 16:39:25 +01:00
patchback[bot]
c2751dd6f5 [PR #11513/0e184d24 backport][stable-12] add support for localizationTexts in keycloak_realm.py (#11530)
add support for localizationTexts in keycloak_realm.py (#11513)

* add support for localizationTexts in keycloak_realm.py

* add changelog fragment

* change version added to next minor release

* Update changelogs/fragments/11513-keycloak-realm-localizationTexts-support.yml



* Update plugins/modules/keycloak_realm.py



---------


(cherry picked from commit 0e184d24cf)

Co-authored-by: nwintering <33374766+nwintering@users.noreply.github.com>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-02-21 23:07:08 +01:00
patchback[bot]
d3dd685ad4 [PR #11515/7cd75945 backport][stable-12] #11502 Fix mapping of config of keycloak_user_federation (#11529)
#11502 Fix mapping of config of keycloak_user_federation (#11515)

* #11502 Fix mapping of config

Fix mapping of config

Fix diff for mappers

* Fix formatting with nox

* Update changelogs/fragments/11502-keycloak-config-mapper.yaml



* Remove duplicate comment
https://github.com/ansible-collections/community.general/pull/11515#discussion_r2821444756

---------


(cherry picked from commit 7cd75945b2)

Co-authored-by: mixman68 <greg.djg13@gmail.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-02-21 12:11:19 +01:00
patchback[bot]
696b6e737a [PR #11523/1ae058db backport][stable-12] reduce collection build time with build_ignore (#11528)
reduce collection build time with build_ignore (#11523)

* reduce build time with build_ignore



* just ignore .nox



---------


(cherry picked from commit 1ae058db63)

Signed-off-by: Thomas Sjögren <konstruktoid@users.noreply.github.com>
Co-authored-by: Thomas Sjögren <konstruktoid@users.noreply.github.com>
2026-02-21 11:43:25 +01:00
patchback[bot]
45d16053ee [PR #10306/38f93c80 backport][stable-12] New Callback plugin: loganalytics_ingestion adding Azure Log Analytics Ingestion (#11527)
New Callback plugin: `loganalytics_ingestion` adding Azure Log Analytics Ingestion (#10306)

* Add Azure Log Analytics Ingestion API plugin

The Ingestion API allows sending data to a Log Analytics workspace in
Azure Monitor.

* Fix LogAnalytics Ingestion shebang

* Fix Log Analytics Ingestion pep8 tests

* Fix Log Analytics Ingestion pylint tests

* Fix Log Analytics Ingestion import tests

* Fix Log Analytics Ingestion pylint test

* Add Log Analytics Ingestion auth timeout

Previous behavior was to use the 'request' module's default timeout;
this makes auth timeout value consistent with the task submission
timeout value.

* Display Log Analytics Ingestion event data as JSON

Previous behavior was to display the data as a Python dictionary.
The new behavior makes it easier to generate a sample JSON file in order
to import into Azure when creating the table.

* Add Azure Log Analytics Ingestion timeout param

This parameter controls how long the plugin will wait for an HTTP response
from the Azure Log Analytics API before considering the request a failure.
Previous behavior was hardcoded to 2 seconds.

* Fix Azure Log Ingestion unit test

The class instantiation was missing an additional argument that was added
in a previous patch; add it.  Converting to JSON also caused the Mock
TaskResult object to throw a serialization error; override the function
for JSON conversion to just return bogus data instead.

* Fix loganalytics_ingestion linter errors

* Fix LogAnalytics Ingestion env vars

Prefix the LogAnalytics Ingestion plugin's environment variable names
with 'ANSIBLE_' in order to align with plugin best practices.

* Remove LogAnalytics 'requests' dep from docs

The LogAnalytics callback plugin does not actually require 'requests',
so remove it from the documented dependencies.

* Refactor LogAnalytics Ingestion to use URL utils

This replaces the previous behavior of depending on the external
'requests' library.

* Simplify LogAnalytics Ingestion token valid check



* Remove LogAnalytics Ingestion extra arg validation

Argument validation should be handled by ansible-core, so remove the
extra argument validation in the plugin itself.

* Update LogAnalytics Ingestion version added

* Remove LogAnalytics Ingestion coding marker

The marker is no longer needed as Python2 is no longer supported.

* Fix some LogAnalytics Ingestion grammar errors

* Refactor LogAnalytics Ingestion plugin messages

Consistently use "plugin" instead of module, and refer to the module by
its FQCN instead of its prose name.

* Remove LogAnalytics Ingestion extra logic

A few unused vars were being set; stop setting them.

* Fix LogAnalytics Ingestion nox sanity tests

* Fix LogAnalytics Ingestion unit tests

The refactor to move away from the 'requests' dependency to use
module_utils broke the plugin's unit tests; re-write the plugin's unit
tests for module_utils.

* Add nox formatting to LogAnalytics Ingestion

* Fix Log Analytics Ingestion urllib import

Remove the compatibility import via 'six' for 'urllib' since Python 2
is no longer supported.

* Bump LogAnalytics Ingestion plugin version added

* Remove LogAnalytics Ingestion required: false docs

Required being false is the default, so no need to explicitly add it.

* Simplify LogAnalytics Ingestion role name logic

* Clean LogAnalytics Ingestion redundant comments

* Clean LogAnalytics Ingestion unit test code

Rename all Mock objects to use snake_case and consistently use '_mock'
as a suffix instead of sometimes using it as a prefix and sometimes
using it as a suffix.

* Refactor LogAnalytics Ingestion unit tests

Move all of the tests outside of the 'setUp' method.

* Refactor LogAnalytics Ingestion test

Add a test to validate that part of the contents sent match what was
supposed to be sent.

* Refactor LogAnalytics Ingestion test

Make the names consistent again.

* Add LogAnalytics Ingestion sample data docs

* Apply suggestions from code review



---------


(cherry picked from commit 38f93c80f1)

Co-authored-by: wtcline-intc <wade.cline@intel.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-02-21 11:43:16 +01:00
patchback[bot]
1d4fd21702 [PR #11471/34938ca1 backport][stable-12] keycloak_user_rolemapping: handle None response for client role lookup (#11522)
keycloak_user_rolemapping: handle None response for client role lookup (#11471)

* fix(keycloak_user_rolemapping): handle None response for client role lookup

When adding a client role to a user who has no existing roles for that
client, get_client_user_rolemapping_by_id() returns None. The existing
code indexed directly into the result causing a TypeError. Add the same
None check that already existed for realm roles since PR #11256.

Fixes #10960

* fix(tests): use dict format for task vars in keycloak_user_rolemapping tests

Task-level vars requires a YAML mapping, not a sequence. The leading
dash (- roles:) produced a list instead of a dict, which ansible-core
2.20 rejects with "Vars in a Task must be specified as a dictionary".

* Update changelogs/fragments/keycloak-user-rolemapping-client-none-check.yml



---------


(cherry picked from commit 34938ca1ef)

Co-authored-by: Ivan Kokalovic <67540157+koke1997@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-02-18 20:50:15 +01:00
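The None-handling fix can be sketched like this; the function name is illustrative of the pattern, not the module's actual code:

```python
def existing_client_role_ids(rolemapping):
    # The Keycloak API helper returns None when the user has no roles
    # for the client; indexing into None raised TypeError before the fix.
    if rolemapping is None:
        return []
    return [role["id"] for role in rolemapping]
```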
patchback[bot]
bfcdeeab91 [PR #11468/80d21f2a backport][stable-12] keycloak_realm_key: add full support for all Keycloak key providers (#11519)
keycloak_realm_key: add full support for all Keycloak key providers (#11468)

* feat(keycloak_realm_key): add support for auto-generated key providers

Add support for Keycloak's auto-generated key providers where Keycloak
manages the key material automatically:

- rsa-generated: Auto-generates RSA signing keys
- hmac-generated: Auto-generates HMAC signing keys
- aes-generated: Auto-generates AES encryption keys
- ecdsa-generated: Auto-generates ECDSA signing keys

New algorithms:
- HMAC: HS256, HS384, HS512
- ECDSA: ES256, ES384, ES512
- AES: AES (no algorithm parameter needed)

New config options:
- secret_size: For HMAC/AES providers (key size in bytes)
- key_size: For RSA-generated provider (key size in bits)
- elliptic_curve: For ECDSA-generated provider (P-256, P-384, P-521)

Changes:
- Make private_key/certificate optional (only required for rsa/rsa-enc)
- Add provider-algorithm validation with clear error messages
- Fix KeyError when managing default realm keys (issue #11459)
- Maintain backward compatibility: RS256 default works for rsa/rsa-generated

Fixes: #11459

* fix: address sanity test failures

- Add 'default: RS256' to algorithm documentation to match spec
- Add no_log=True to secret_size parameter per sanity check

* feat(keycloak_realm_key): extend support for all Keycloak key providers

Add support for remaining auto-generated key providers:
- rsa-enc-generated (RSA encryption keys with RSA1_5, RSA-OAEP, RSA-OAEP-256)
- ecdh-generated (ECDH key exchange with ECDH_ES, ECDH_ES_A128KW/A192KW/A256KW)
- eddsa-generated (EdDSA signing with Ed25519, Ed448 curves)

Changes:
- Add provider-specific elliptic curve config key mapping
  (ecdsaEllipticCurveKey, ecdhEllipticCurveKey, eddsaEllipticCurveKey)
- Add PROVIDERS_WITHOUT_ALGORITHM constant for providers that don't need algorithm
- Add elliptic curve validation per provider type
- Update documentation with all supported algorithms and examples
- Add comprehensive integration tests for all new providers

This completes full coverage of all Keycloak key provider types.

* style: apply ruff formatting

* feat(keycloak_realm_key): add java-keystore provider and update_password

Add support for java-keystore provider to import keys from Java
Keystore (JKS or PKCS12) files on the Keycloak server filesystem.

Add update_password parameter to control password handling for
java-keystore provider:
- always (default): Always send passwords to Keycloak
- on_create: Only send passwords when creating, preserve existing
  passwords when updating (enables idempotent playbooks)

The on_create mode sends the masked value ("**********") that Keycloak
recognizes as "preserve existing password", matching the behavior when
re-importing an exported realm.

Replace password_checksum with update_password - the checksum approach
was complex and error-prone. The update_password parameter is simpler
and follows the pattern used by ansible.builtin.user module.

Also adds key_info return value containing kid, certificate fingerprint,
status, and expiration for java-keystore keys.

* address PR review feedback

- Remove no_log=True from secret_size (just an int, not sensitive)
- Add version_added: 12.4.0 to new parameters and return values
- Remove "Added in community.general 12.4.0" from description text
- Consolidate changelog entries into 4 focused entries
- Remove bugfix from changelog (now in separate PR #11470)

* address review feedback from russoz and felixfontein

- remove docstrings from module-local helpers
- remove line-by-line comments and unnecessary null guard
- use specific exceptions instead of bare except Exception
- use module.params["key"] instead of .get("key")
- consolidate changelog into single entry
- avoid "complete set" claim, reference Keycloak 26 instead

* address round 2 review feedback

- Extract remove_sensitive_config_keys() helper (DRY refactor)
- Simplify RS256 validation to single code path
- Add TypeError to inner except in compute_certificate_fingerprint()
- Remove redundant comments (L812, L1031)
- Switch .get() to direct dict access for module.params

(cherry picked from commit 80d21f2a0d)

Co-authored-by: Ivan Kokalovic <67540157+koke1997@users.noreply.github.com>
2026-02-18 18:36:48 +01:00
patchback[bot]
5dcb3b8f59 [PR #10841/986118c0 backport][stable-12] keycloak_realm_localization: new module - realm localization control (#11517)
keycloak_realm_localization: new module - realm localization control (#10841)

* add support for management of keycloak localizations

* unit test for keycloak localization support

* keycloak_realm_localization botmeta record

* rev: improvements after code review

(cherry picked from commit 986118c0af)

Co-authored-by: Jakub Danek <danekja@users.noreply.github.com>
2026-02-18 07:44:44 +01:00
patchback[bot]
42c20a754b [PR #11488/5e0fd120 backport][stable-12] ModuleHelper: ensure compatibility with ModuleTestCase (#11518)
ModuleHelper: ensure compatibility with `ModuleTestCase` (#11488)

* ModuleHelper: ensure compatibility with `ModuleTestCase`.

This change allows configuring the `module_fails_on_exception` decorator by passing a tuple of exception types that should not be handled by the decorator itself. In the context of `ModuleTestCase`, use `(AnsibleExitJson, AnsibleFailJson)` to let them pass through the decorator without modification.



* Another approach that allows user-defined exception types to pass through the decorator. When the decorator takes no arguments at all, we must hard-code the name of the attribute that is looked up on `self`.



* Approach that removes decorator parametrization and relies on an object/class variable named `unhandled_exceptions`.



* implement a context manager that allows some exception types to pass through



* Update changelogs/fragments/11488-mh-ensure-compatibiliy-with-module-tests.yml



* Exception placeholder added



---------




(cherry picked from commit 5e0fd1201c)

Signed-off-by: Fiehe Christoph  <c.fiehe@eurodata.de>
Co-authored-by: Christoph Fiehe <cfiehe@users.noreply.github.com>
Co-authored-by: Fiehe Christoph <c.fiehe@eurodata.de>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-02-18 07:26:47 +01:00
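The pass-through mechanism described above can be sketched as a context manager; all names here are illustrative stand-ins, not the actual module_utils API:

```python
from contextlib import contextmanager


class ModuleFailure(Exception):
    """Stand-in for the error the helper normally raises on failure."""


class AnsibleExitJson(Exception):
    """Stand-in for the exception a ModuleTestCase harness raises."""


@contextmanager
def catch_module_errors(passthrough=()):
    # Exception types listed in `passthrough` cross the boundary
    # unchanged so a test harness can catch them; everything else
    # is wrapped into the module-level failure exception.
    try:
        yield
    except passthrough:
        raise
    except Exception as exc:
        raise ModuleFailure(str(exc)) from exc


def outcome(passthrough, exc):
    # Tiny driver returning the name of whatever escapes the manager.
    try:
        with catch_module_errors(passthrough):
            raise exc
    except Exception as caught:
        return type(caught).__name__
```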
patchback[bot]
75b6b4d792 [PR #11461/4bbedfd7 backport][stable-12] nsupdate: fix missing keyring initialization without TSIG auth (#11516)
nsupdate: fix missing keyring initialization without TSIG auth (#11461)

* nsupdate: fix missing keyring initialization without TSIG auth

* Update changelogs/fragments/fix-nsupdate-keyring.yml



---------


(cherry picked from commit 4bbedfd7df)

Co-authored-by: Pascal <pascal.guinet@free.fr>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-02-18 06:57:33 +01:00
patchback[bot]
a0c4308bed [PR #11503/85a0deee backport][stable-12] keycloak module utils: group search optimization (#11511)
keycloak module utils: group search optimization (#11503)

* Updated get_group_by_name with a query based lookup for improved speed

* Add changelog fragment for keycloak group search optimization

* Address review feedback: update changelog text and reformat code with ruff

* improved changelog fragment

* Update changelogs/fragments/11503-keycloak-group-search-optimization.yml



---------


(cherry picked from commit 85a0deeeba)

Co-authored-by: Andreas Wegmann <andreas.we9mann@gmail.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-02-14 21:14:52 +01:00
patchback[bot]
6437fe15c8 [PR #11486/c05c3133 backport][stable-12] seport: Add support for dccp and sctp protocols (#11509)
seport: Add support for dccp and sctp protocols (#11486)

Support for the dccp and sctp protocols was added to the SELinux userspace
Python libraries in the 3.0 release in November 2019.

(cherry picked from commit c05c31334b)

Co-authored-by: Petr Lautrbach <lautrbach@redhat.com>
2026-02-14 21:14:44 +01:00
patchback[bot]
baddfa5a80 [PR #11501/ed7ccbe3 backport][stable-12] maven_artifact: resolve SNAPSHOT to latest using snapshot metadata block (#11508)
maven_artifact: resolve SNAPSHOT to latest using snapshot metadata block (#11501)

* fix(maven_artifact): resolve SNAPSHOT to latest using snapshot metadata block

Prefer the <snapshot> block (timestamp + buildNumber) from maven-metadata.xml
which always points to the latest build, instead of scanning <snapshotVersions>
and returning on the first match. Repositories like GitHub Packages keep all
historical entries in <snapshotVersions> (oldest first), causing the module to
resolve to the oldest snapshot instead of the latest.

Fixes #5117
Fixes #11489

* fix(maven_artifact): address review feedback

- Check both timestamp and buildNumber before using snapshot block,
  preventing IndexError when buildNumber is missing
- Remove unreliable snapshotVersions scanning fallback; use literal
  -SNAPSHOT version for non-unique snapshot repos instead
- Add tests for incomplete snapshot block and non-SNAPSHOT versions

* fix(maven_artifact): restore snapshotVersions scanning with last-match

Restore <snapshotVersions> scanning as primary resolution (needed for
per-extension accuracy per MNG-5459), but collect the last match instead
of returning on the first. Fall back to <snapshot> block when no
<snapshotVersions> match is found, then to literal -SNAPSHOT version.

* docs: update changelog fragment to match final implementation

* fix(maven_artifact): use updated timestamp for snapshot resolution

Use the <updated> attribute to select the newest snapshotVersion entry
instead of relying on list order. This works independently of how the
repository manager sorts entries in maven-metadata.xml.

Also fix test docstring and update changelog fragment per reviewer
feedback.

* test(maven_artifact): shuffle entries to verify updated timestamp sorting

Reorder snapshotVersion entries so the newest JAR is in the middle,
not at the end. This ensures the test actually validates that resolution
uses the <updated> timestamp rather than relying on list position.

(cherry picked from commit ed7ccbe3d4)

Co-authored-by: Adam R. <ariwk@protonmail.com>
2026-02-14 21:14:36 +01:00
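Selecting the newest entry by its `<updated>` timestamp rather than list order, as described above, can be sketched like this (metadata trimmed to the relevant fields; the newest entry is deliberately in the middle, mirroring the shuffled test):

```python
import xml.etree.ElementTree as ET

METADATA = """<metadata>
  <versioning>
    <snapshotVersions>
      <snapshotVersion><extension>jar</extension><value>1.0-20240101.000000-1</value><updated>20240101000000</updated></snapshotVersion>
      <snapshotVersion><extension>jar</extension><value>1.0-20240301.000000-3</value><updated>20240301000000</updated></snapshotVersion>
      <snapshotVersion><extension>jar</extension><value>1.0-20240201.000000-2</value><updated>20240201000000</updated></snapshotVersion>
    </snapshotVersions>
  </versioning>
</metadata>"""


def resolve_snapshot(xml_text, extension="jar"):
    root = ET.fromstring(xml_text)
    candidates = [
        sv for sv in root.iter("snapshotVersion")
        if sv.findtext("extension") == extension
    ]
    if not candidates:
        return None
    # <updated> is a sortable yyyyMMddHHmmss timestamp, so max() picks
    # the newest entry regardless of document order.
    newest = max(candidates, key=lambda sv: sv.findtext("updated"))
    return newest.findtext("value")
```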
patchback[bot]
b7d1483a08 [PR #11500/c9313af9 backport][stable-12] keycloak_identity_provider: add claims example for oidc-advanced-group-idp-mapper (#11507)
keycloak_identity_provider: add claims example for oidc-advanced-group-idp-mapper (#11500)

Add claims example for oidc-advanced-group-idp-mapper

For me it was not clear how to create claims using oidc-advanced-group-idp-mapper; perhaps other people can benefit from the following example.

(cherry picked from commit c9313af971)

Co-authored-by: David Filipe <68902816+daveopz@users.noreply.github.com>
2026-02-14 21:14:17 +01:00
patchback[bot]
b87121e1eb [PR #11504/8729f563 backport][stable-12] Update check_availability_service to return data instead of boolean (#11510)
Update check_availability_service to return data instead of boolean (#11504)

* Update check_availability_service to return data instead of boolean

* Add changelog fragment

(cherry picked from commit 8729f563b3)

Co-authored-by: Scott Seekamp <13857911+sseekamp@users.noreply.github.com>
2026-02-14 21:14:07 +01:00
patchback[bot]
cb17703c36 [PR #11495/88adca3f backport][stable-12] python_requirements_info: use importlib.metadata when available (#11496)
python_requirements_info: use importlib.metadata when available (#11495)

Use importlib.metadata when available.

(cherry picked from commit 88adca3fb4)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-02-11 07:12:04 +01:00
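The `importlib.metadata`-based lookup can be sketched as follows; the helper name is illustrative:

```python
from importlib import metadata


def installed_version(distribution):
    # importlib.metadata is in the standard library since Python 3.8
    # and replaces the deprecated pkg_resources scan; report None when
    # the distribution is not installed.
    try:
        return metadata.version(distribution)
    except metadata.PackageNotFoundError:
        return None
```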
patchback[bot]
05d457dca7 [PR #11484/63ddca7f backport][stable-12] supervisorctl: remove unstable tag from integration tests (#11494)
supervisorctl: remove unstable tag from integration tests (#11484)

(cherry picked from commit 63ddca7f21)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-02-10 21:51:31 +01:00
patchback[bot]
7fce59fbc6 [PR #11479/476f2bf6 backport][stable-12] Integration tests: replace ansible_xxx with ansible_facts.xxx (#11480)
Integration tests: replace ansible_xxx with ansible_facts.xxx (#11479)

Replace ansible_xxx with ansible_facts.xxx.

(cherry picked from commit 476f2bf641)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-02-07 18:43:49 +01:00
patchback[bot]
de6967d3ff [PR #11473/df6d6269 backport][stable-12] keycloak_client: add valid_post_logout_redirect_uris and backchannel_logout_url (#11475)
keycloak_client: add valid_post_logout_redirect_uris and backchannel_logout_url (#11473)

* feat(keycloak_client): add valid_post_logout_redirect_uris and backchannel_logout_url

Add two new convenience parameters that map to client attributes:

- valid_post_logout_redirect_uris: sets post.logout.redirect.uris
  attribute (list items joined with ##)
- backchannel_logout_url: sets backchannel.logout.url attribute

These fields are not top-level in the Keycloak REST API but are stored
as client attributes. The new parameters provide a user-friendly
interface without requiring users to know the internal attribute names
and ##-separator format.

Fixes #6812, fixes #4892

* consolidate changelog and add PR link per review feedback

(cherry picked from commit df6d6269a6)

Co-authored-by: Ivan Kokalovic <67540157+koke1997@users.noreply.github.com>
2026-02-07 16:34:46 +01:00
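The attribute mapping can be sketched like this, using the `##` separator and attribute name described in the commit message above (function name illustrative):

```python
def post_logout_attributes(uris):
    # Keycloak stores the list as a single client attribute whose
    # entries are joined with "##"; the module option hides this
    # internal format from the user.
    return {"post.logout.redirect.uris": "##".join(uris)}

attrs = post_logout_attributes(
    ["https://app.example.com/logout", "https://alt.example.com/bye"]
)
```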
patchback[bot]
bbb9b03b5e [PR #11464/8b0ce3e2 backport][stable-12] community.general.copr: clarify includepkgs/excludepkgs (#11476)
community.general.copr: clarify includepkgs/excludepkgs (#11464)

At first glance, includepkgs seems to be something that would install
the package name from the given copr repo.  This isn't helped by the
example that says "Install caddy" which very much looks like it is
installing the package from the repo.  Not only did I, a human,
hallucinate this behaviour, so did a large search engine's AI
responses to related queries.

In fact these are labels to vary what packages DNF sees.  Clarify this
by using wording and examples closer to the upstream documentation [1]

[1] https://dnf.readthedocs.io/en/latest/conf_ref.html

(cherry picked from commit 8b0ce3e28f)

Co-authored-by: Ian Wienand <ian@wienand.org>
2026-02-07 16:34:38 +01:00
patchback[bot]
a0d6487f6d [PR #11455/af4dbafe backport][stable-12] keycloak_client: fix diff for keycloak client auth flow overrides (#11477)
keycloak_client: fix diff for keycloak client auth flow overrides (#11455)

* 11430: fix diff for keycloak client auth flow overrides

* 11430: add changelog fragment

* 11430: move util function merge_settings_without_absent_nulls to the util functions file _keycloak_utils

* 11443: code cleanup

---------


(cherry picked from commit af4dbafe86)

Co-authored-by: thomasbargetz <thomas.bargetz@gmail.com>
Co-authored-by: Thomas Bargetz <thomas.bargetz@rise-world.com>
2026-02-07 16:34:29 +01:00
patchback[bot]
88bfb6dda3 [PR #11470/10681731 backport][stable-12] keycloak_realm_key: handle missing config fields for default keys (#11478)
keycloak_realm_key: handle missing config fields for default keys (#11470)

* fix(keycloak_realm_key): handle missing config fields for default keys

Keycloak API may not return 'active', 'enabled', or 'algorithm' fields
in the config response for default/auto-generated realm keys. This caused
a KeyError when the module tried to compare these fields during state
detection.

Use .get() with the expected value as default to handle missing fields
gracefully, treating them as unchanged if not present in the API response.

Fixes: #11459

* add PR link to changelog entry per review feedback

(cherry picked from commit 106817316d)

Co-authored-by: Ivan Kokalovic <67540157+koke1997@users.noreply.github.com>
2026-02-07 16:34:22 +01:00
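The `.get()`-with-expected-default pattern from the fix above can be sketched as (function name illustrative):

```python
def field_unchanged(config, key, desired):
    # Keycloak may omit 'active', 'enabled', or 'algorithm' in the
    # config it returns for default realm keys; a missing field counts
    # as matching the desired value instead of raising KeyError.
    return config.get(key, desired) == desired
```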
patchback[bot]
d637db7623 [PR #11472/c41de53d backport][stable-12] keycloak: URL-encode query parameters for usernames with special characters (#11474)
keycloak: URL-encode query parameters for usernames with special characters (#11472)

* fix(keycloak): URL-encode query params for usernames with special chars

get_user_by_username() concatenates the username directly into the URL
query string. When the username contains a +, it is interpreted as a
space by the server, returning no match and causing a TypeError.

Use urllib.parse.quote() (already imported) for the username parameter.
Also replace three fragile .replace(' ', '%20') calls in the authz
search methods with proper quote() calls.

Fixes #10305

* Update changelogs/fragments/keycloak-url-encode-query-params.yml



---------


(cherry picked from commit c41de53dbb)

Co-authored-by: Ivan Kokalovic <67540157+koke1997@users.noreply.github.com>
Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-02-06 20:36:02 +01:00
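The encoding problem and its fix can be sketched with `urllib.parse.quote`; the query layout is illustrative, not the module's exact URL:

```python
from urllib.parse import quote


def user_lookup_query(username):
    # Without quoting, a "+" in the query string is decoded as a space
    # by the server, so "jane+test@example.com" matches nothing and the
    # subsequent indexing into the empty result raised TypeError.
    return f"users?username={quote(username, safe='')}&exact=true"

query = user_lookup_query("jane+test@example.com")
```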
patchback[bot]
2198588afa [PR #11454/b236772e backport][stable-12] keycloak_client: remove id's as change from diff for protocol mappers (#11469)
keycloak_client: remove id's as change from diff for protocol mappers (#11454)

* 11453 remove id's as change from diff for protocol mappers

* Update changelogs/fragments/11453-keycloak-client-protocol-mapper-ids.yml



---------


(cherry picked from commit b236772e57)

Co-authored-by: Simon Moosbrugger <707958+simonmoosbrugger@users.noreply.github.com>
Co-authored-by: Felix Fontein <felix@fontein.de>
2026-02-05 17:29:29 +01:00
Felix Fontein
9d6db6002c Add latest commit to .git-blame-ignore-revs.
(cherry picked from commit bce87a2a77)
2026-02-04 09:04:37 +01:00
patchback[bot]
dd9c86dfc0 [PR #11465/24098cd6 backport][stable-12] Reformat code (#11466)
Reformat code (#11465)

Reformat code.

(cherry picked from commit 24098cd638)

Co-authored-by: Felix Fontein <felix@fontein.de>
2026-02-04 09:04:00 +01:00
patchback[bot]
a266ba1d6e [PR #11457/95b24ac3 backport][stable-12] jboss: deprecation (#11458)
jboss: deprecation (#11457)

(cherry picked from commit 95b24ac3fe)

Co-authored-by: Alexei Znamensky <103110+russoz@users.noreply.github.com>
2026-01-31 10:03:36 +01:00
Felix Fontein
4167d8ebeb The next expected release will be 12.4.0. 2026-01-26 19:00:03 +01:00
195 changed files with 5127 additions and 815 deletions


@@ -12,3 +12,4 @@ eaa5e07b2866e05b6c7b5628ca92e9cb1142d008
340ff8586d4f1cb6a0f3c934eb42589bcc29c0ea
e530d2906a1f61df89861286ac57c951a247f32c
b769b0bc01520d12699d3911e1fc290b813cde40
dd9c86dfc094131f223ffb59e5a3d9f2dfc5875d

.github/BOTMETA.yml

@@ -65,6 +65,9 @@ files:
$callbacks/log_plays.py: {}
$callbacks/loganalytics.py:
maintainers: zhcli
$callbacks/loganalytics_ingestion.py:
ignore: zhcli
maintainers: pboushy vsh47 wtcline-intc
$callbacks/logdna.py: {}
$callbacks/logentries.py: {}
$callbacks/logstash.py:
@@ -133,6 +136,8 @@ files:
$doc_fragments/hwc.py:
labels: hwc
maintainers: $team_huawei
$doc_fragments/_icinga2_api.py:
maintainers: cfiehe
$doc_fragments/nomad.py:
maintainers: chris93111 apecnascimento
$doc_fragments/pipx.py:
@@ -362,6 +367,8 @@ files:
keywords: cloud huawei hwc
labels: huawei hwc_utils networking
maintainers: $team_huawei
$module_utils/_icinga2.py:
maintainers: cfiehe
$module_utils/identity/keycloak/keycloak.py:
maintainers: $team_keycloak
$module_utils/identity/keycloak/keycloak_clientsecret.py:
@@ -713,6 +720,8 @@ files:
maintainers: $team_huawei huaweicloud
$modules/ibm_sa_:
maintainers: tzure
$modules/icinga2_downtime.py:
maintainers: cfiehe
$modules/icinga2_feature.py:
maintainers: nerzhul
$modules/icinga2_host.py:
@@ -857,6 +866,8 @@ files:
maintainers: fynncfchen
$modules/keycloak_realm_key.py:
maintainers: mattock
$modules/keycloak_realm_localization.py:
maintainers: danekja
$modules/keycloak_role.py:
maintainers: laurpaum
$modules/keycloak_user.py:


@@ -158,6 +158,8 @@ ignore_missing_imports = True
ignore_missing_imports = True
[mypy-pingdom.*]
ignore_missing_imports = True
[mypy-pkg_resources.*]
ignore_missing_imports = True
[mypy-portage.*]
ignore_missing_imports = True
[mypy-potatoes_that_will_never_be_there.*]



@@ -6,6 +6,55 @@ Community General Release Notes
This changelog describes changes after version 11.0.0.
v12.4.0
=======
Release Summary
---------------
Regular bugfix and feature release.
Minor Changes
-------------
- ModuleHelper module utils - allow ignoring specific exceptions in ``module_fails_on_exception`` decorator (https://github.com/ansible-collections/community.general/pull/11488).
- from_ini filter plugin - add ``delimiters`` parameter to allow correctly parsing more INI documents (https://github.com/ansible-collections/community.general/issues/11506, https://github.com/ansible-collections/community.general/pull/11512).
- keycloak_client - add ``valid_post_logout_redirect_uris`` option to configure post logout redirect URIs for a client, and ``backchannel_logout_url`` option to configure the backchannel logout URL for a client (https://github.com/ansible-collections/community.general/issues/6812, https://github.com/ansible-collections/community.general/issues/4892, https://github.com/ansible-collections/community.general/pull/11473).
- keycloak_client_rolemapping, keycloak_realm_rolemapping, keycloak_group - optimize retrieval of groups by name to use Keycloak search API with exact matching instead of fetching all groups (https://github.com/ansible-collections/community.general/pull/11503).
- keycloak_realm - add support for ``localizationTexts`` option in Keycloak realms (https://github.com/ansible-collections/community.general/pull/11513).
- keycloak_realm_key - add support for auto-generated key providers (``rsa-generated``, ``rsa-enc-generated``, ``hmac-generated``, ``aes-generated``, ``ecdsa-generated``, ``ecdh-generated``, ``eddsa-generated``), ``java-keystore`` provider, additional algorithms (HMAC, ECDSA, ECDH, EdDSA, AES), and new config options (``secret_size``, ``key_size``, ``elliptic_curve``, ``keystore``, ``keystore_password``, ``key_alias``, ``key_password``). Also makes ``config.private_key`` and ``config.certificate`` optional as they are only required for imported key providers (https://github.com/ansible-collections/community.general/pull/11468).
- redfish_info - add Redfish Root data to results of successful ``CheckAvailability`` command (https://github.com/ansible-collections/community.general/pull/11504).
- seport - add support for the DCCP and SCTP protocols (https://github.com/ansible-collections/community.general/pull/11486).
Bugfixes
--------
- keycloak module utils - fix ``TypeError`` crash when managing users whose username or email contains special characters such as ``+`` (https://github.com/ansible-collections/community.general/issues/10305, https://github.com/ansible-collections/community.general/pull/11472).
- keycloak module utils - use proper URL encoding (``urllib.parse.quote``) for query parameters in authorization permission name searches, replacing fragile manual space replacement (https://github.com/ansible-collections/community.general/pull/11472).
- keycloak_client - fix idempotency bug caused by ``null`` flow overrides value differences for non-existing flow overrides (https://github.com/ansible-collections/community.general/issues/11430, https://github.com/ansible-collections/community.general/pull/11455).
- keycloak_client - remove IDs as change from diff result for protocol mappers (https://github.com/ansible-collections/community.general/issues/11453, https://github.com/ansible-collections/community.general/pull/11454).
- keycloak_realm_key - fix ``KeyError`` crash when managing realm keys where Keycloak does not return ``active``, ``enabled``, or ``algorithm`` fields in the config response (https://github.com/ansible-collections/community.general/issues/11459, https://github.com/ansible-collections/community.general/pull/11470).
- keycloak_user_federation - mapper config item can be an array (https://github.com/ansible-collections/community.general/issues/11502, https://github.com/ansible-collections/community.general/pull/11515).
- keycloak_user_rolemapping - fix ``TypeError`` crash when adding a client role to a user who has no existing roles for that client (https://github.com/ansible-collections/community.general/issues/10960, https://github.com/ansible-collections/community.general/pull/11471).
- maven_artifact - fix SNAPSHOT version resolution to pick the newest matching ``<snapshotVersion>`` entry by ``<updated>`` timestamp instead of the first. Repositories like GitHub Packages keep all historical entries in ``<snapshotVersions>`` (oldest first), causing the module to resolve to the oldest snapshot instead of the latest (https://github.com/ansible-collections/community.general/issues/5117, https://github.com/ansible-collections/community.general/issues/11489, https://github.com/ansible-collections/community.general/pull/11501).
- nsupdate - fix ``AttributeError`` when using the module without TSIG authentication (https://github.com/ansible-collections/community.general/issues/11460, https://github.com/ansible-collections/community.general/pull/11461).
- python_requirements_info - use ``importlib.metadata`` if ``pkg_resources`` from ``setuptools`` cannot be imported. That module has been removed from setuptools 82.0.0 (https://github.com/ansible-collections/community.general/issues/11491, https://github.com/ansible-collections/community.general/pull/11492).
- splunk callback plugin - replace deprecated callback function (https://github.com/ansible-collections/community.general/pull/11485).
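The URL-encoding fix for usernames with special characters boils down to using ``urllib.parse.quote`` instead of manual replacement; a minimal standard-library sketch (the sample username is invented for illustration):

```python
from urllib.parse import quote

# A username containing '+' must be percent-encoded before it goes into a
# query string, otherwise the server decodes the literal '+' as a space.
username = "user+test@example.com"

# The fragile approach the changelog describes replacing: only spaces handled.
manual = username.replace(" ", "%20")

# Proper encoding with safe="": every reserved character is escaped.
encoded = quote(username, safe="")

print(manual)   # user+test@example.com  ('+' left intact, misread as a space)
print(encoded)  # user%2Btest%40example.com
```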
New Plugins
-----------
Callback
~~~~~~~~
- community.general.loganalytics_ingestion - Posts task results to an Azure Log Analytics workspace using the new Logs Ingestion API.
New Modules
-----------
- community.general.icinga2_downtime - Manages Icinga 2 downtimes.
- community.general.keycloak_realm_localization - Allows management of Keycloak realm localization overrides via the Keycloak API.
v12.3.0
=======


@@ -1787,3 +1787,102 @@ releases:
name: to_toml
namespace: null
release_date: '2026-01-26'
12.4.0:
changes:
bugfixes:
- keycloak module utils - fix ``TypeError`` crash when managing users whose
username or email contains special characters such as ``+`` (https://github.com/ansible-collections/community.general/issues/10305,
https://github.com/ansible-collections/community.general/pull/11472).
- keycloak module utils - use proper URL encoding (``urllib.parse.quote``)
for query parameters in authorization permission name searches, replacing
fragile manual space replacement (https://github.com/ansible-collections/community.general/pull/11472).
- keycloak_client - fix idempotency bug caused by ``null`` flow overrides
value differences for non-existing flow overrides (https://github.com/ansible-collections/community.general/issues/11430,
https://github.com/ansible-collections/community.general/pull/11455).
- keycloak_client - remove IDs as change from diff result for protocol mappers
(https://github.com/ansible-collections/community.general/issues/11453,
https://github.com/ansible-collections/community.general/pull/11454).
- keycloak_realm_key - fix ``KeyError`` crash when managing realm keys where
Keycloak does not return ``active``, ``enabled``, or ``algorithm`` fields
in the config response (https://github.com/ansible-collections/community.general/issues/11459,
https://github.com/ansible-collections/community.general/pull/11470).
- keycloak_user_federation - mapper config item can be an array (https://github.com/ansible-collections/community.general/issues/11502,
https://github.com/ansible-collections/community.general/pull/11515).
- keycloak_user_rolemapping - fix ``TypeError`` crash when adding a client
role to a user who has no existing roles for that client (https://github.com/ansible-collections/community.general/issues/10960,
https://github.com/ansible-collections/community.general/pull/11471).
- maven_artifact - fix SNAPSHOT version resolution to pick the newest matching
``<snapshotVersion>`` entry by ``<updated>`` timestamp instead of the first.
Repositories like GitHub Packages keep all historical entries in ``<snapshotVersions>``
(oldest first), causing the module to resolve to the oldest snapshot instead
of the latest (https://github.com/ansible-collections/community.general/issues/5117,
https://github.com/ansible-collections/community.general/issues/11489, https://github.com/ansible-collections/community.general/pull/11501).
- nsupdate - fix ``AttributeError`` when using the module without TSIG authentication
(https://github.com/ansible-collections/community.general/issues/11460,
https://github.com/ansible-collections/community.general/pull/11461).
- python_requirements_info - use ``importlib.metadata`` if ``pkg_resources``
from ``setuptools`` cannot be imported. That module has been removed from
setuptools 82.0.0 (https://github.com/ansible-collections/community.general/issues/11491,
https://github.com/ansible-collections/community.general/pull/11492).
- splunk callback plugin - replace deprecated callback function (https://github.com/ansible-collections/community.general/pull/11485).
minor_changes:
- ModuleHelper module utils - allow ignoring specific exceptions in the ``module_fails_on_exception``
decorator (https://github.com/ansible-collections/community.general/pull/11488).
- from_ini filter plugin - add ``delimiters`` parameter to allow correctly
parsing more INI documents (https://github.com/ansible-collections/community.general/issues/11506,
https://github.com/ansible-collections/community.general/pull/11512).
- keycloak_client - add ``valid_post_logout_redirect_uris`` option to configure
post logout redirect URIs for a client, and ``backchannel_logout_url`` option
to configure the backchannel logout URL for a client (https://github.com/ansible-collections/community.general/issues/6812,
https://github.com/ansible-collections/community.general/issues/4892, https://github.com/ansible-collections/community.general/pull/11473).
- keycloak_client_rolemapping, keycloak_realm_rolemapping, keycloak_group
- optimize retrieval of groups by name to use Keycloak search API with exact
matching instead of fetching all groups (https://github.com/ansible-collections/community.general/pull/11503).
- keycloak_realm - add support for ``localizationTexts`` option in Keycloak
realms (https://github.com/ansible-collections/community.general/pull/11513).
- keycloak_realm_key - add support for auto-generated key providers (``rsa-generated``,
``rsa-enc-generated``, ``hmac-generated``, ``aes-generated``, ``ecdsa-generated``,
``ecdh-generated``, ``eddsa-generated``), ``java-keystore`` provider, additional
algorithms (HMAC, ECDSA, ECDH, EdDSA, AES), and new config options (``secret_size``,
``key_size``, ``elliptic_curve``, ``keystore``, ``keystore_password``, ``key_alias``,
``key_password``). Also makes ``config.private_key`` and ``config.certificate``
optional as they are only required for imported key providers (https://github.com/ansible-collections/community.general/pull/11468).
- redfish_info - add Redfish Root data to results of successful ``CheckAvailability``
command (https://github.com/ansible-collections/community.general/pull/11504).
- seport - add support for the DCCP and SCTP protocols (https://github.com/ansible-collections/community.general/pull/11486).
release_summary: Regular bugfix and feature release.
fragments:
- 11430-fix-keycloak-client-diff-for-flow-overrides.yml
- 11453-keycloak-client-protocol-mapper-ids.yml
- 11485-avoid-deprected-callback.yml
- 11486-seport-dccp-sctp.yaml
- 11488-mh-ensure-compatibiliy-with-module-tests.yml
- 11492-python_requires_info.yml
- 11502-keycloak-config-mapper.yaml
- 11503-keycloak-group-search-optimization.yml
- 11504-redfish-info-add-results-to-return.yml
- 11512-from_ini-delimiters.yaml
- 11513-keycloak-realm-localizationTexts-support.yml
- 12.4.0.yml
- 5117-maven-artifact-snapshot-resolution.yml
- fix-nsupdate-keyring.yml
- keycloak-client-add-missing-fields.yml
- keycloak-realm-key-generated-providers.yml
- keycloak-realm-key-keyerror-bugfix.yml
- keycloak-url-encode-query-params.yml
- keycloak-user-rolemapping-client-none-check.yml
modules:
- description: Manages Icinga 2 downtimes.
name: icinga2_downtime
namespace: ''
- description: Allows management of Keycloak realm localization overrides via
the Keycloak API.
name: keycloak_realm_localization
namespace: ''
plugins:
callback:
- description: Posts task results to an Azure Log Analytics workspace using
the new Logs Ingestion API.
name: loganalytics_ingestion
namespace: null
release_date: '2026-02-23'


@@ -5,7 +5,7 @@
namespace: community
name: general
version: 12.3.0
version: 12.4.0
readme: README.md
authors:
- Ansible (https://github.com/ansible)
@@ -19,3 +19,5 @@ repository: https://github.com/ansible-collections/community.general
documentation: https://docs.ansible.com/projects/ansible/latest/collections/community/general/
homepage: https://github.com/ansible-collections/community.general
issues: https://github.com/ansible-collections/community.general/issues
build_ignore:
- .nox


@@ -40,6 +40,7 @@ action_groups:
- keycloak_realm
- keycloak_realm_key
- keycloak_realm_keys_metadata_info
- keycloak_realm_localization
- keycloak_realm_rolemapping
- keycloak_role
- keycloak_user
@@ -378,6 +379,10 @@ plugin_routing:
warning_text: Use community.general.idrac_redfish_info instead.
idrac_server_config_profile:
redirect: dellemc.openmanage.idrac_server_config_profile
jboss:
deprecation:
removal_version: 14.0.0
warning_text: Use role middleware_automation.wildfly.wildfly_app_deploy instead.
jenkins_job_facts:
tombstone:
removal_version: 3.0.0


@@ -0,0 +1,340 @@
#!/usr/bin/env python
# Copyright (c) Ansible project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
DOCUMENTATION = """
name: loganalytics_ingestion
type: notification
short_description: Posts task results to an Azure Log Analytics workspace using the new Logs Ingestion API
author:
- Wade Cline (@wtcline-intc) <wade.cline@intel.com>
- Sriramoju Vishal Bharath (@vsh47) <sriramoju.vishal.bharath@intel.com>
- Cyrus Li (@zhcli) <cyrus1006@gmail.com>
description:
- This callback plugin will post task results in JSON format to an Azure Log Analytics workspace using the new Logs Ingestion API.
version_added: "12.4.0"
requirements:
- The callback plugin has been enabled.
- An Azure Log Analytics workspace has been established.
- A Data Collection Rule (DCR) and custom table are created.
options:
dce_url:
description: URL of the Data Collection Endpoint (DCE) for Azure Logs Ingestion API.
type: str
required: true
env:
- name: ANSIBLE_LOGANALYTICS_DCE_URL
ini:
- section: callback_loganalytics
key: dce_url
dcr_id:
description: Data Collection Rule (DCR) ID for the Azure Log Ingestion API.
type: str
required: true
env:
- name: ANSIBLE_LOGANALYTICS_DCR_ID
ini:
- section: callback_loganalytics
key: dcr_id
disable_attempts:
description:
- When O(disable_on_failure=true), number of plugin failures that must occur before the plugin is disabled.
- This helps prevent outright plugin failure from a single, transient network issue.
type: int
default: 3
env:
- name: ANSIBLE_LOGANALYTICS_DISABLE_ATTEMPTS
ini:
- section: callback_loganalytics
key: disable_attempts
disable_on_failure:
description: Stop trying to send data on plugin failure.
type: bool
default: true
env:
- name: ANSIBLE_LOGANALYTICS_DISABLE_ON_FAILURE
ini:
- section: callback_loganalytics
key: disable_on_failure
client_id:
description: Client ID of the Azure App registration for OAuth2 authentication ("Modern Authentication").
type: str
required: true
env:
- name: ANSIBLE_LOGANALYTICS_CLIENT_ID
ini:
- section: callback_loganalytics
key: client_id
client_secret:
description: Client Secret of the Azure App registration.
type: str
required: true
env:
- name: ANSIBLE_LOGANALYTICS_CLIENT_SECRET
ini:
- section: callback_loganalytics
key: client_secret
include_content:
description: Send the content to the Azure Log Analytics workspace.
type: bool
default: false
env:
- name: ANSIBLE_LOGANALYTICS_INCLUDE_CONTENT
ini:
- section: callback_loganalytics
key: include_content
include_task_args:
description: Send the task args to the Azure Log Analytics workspace.
type: bool
default: false
env:
- name: ANSIBLE_LOGANALYTICS_INCLUDE_TASK_ARGS
ini:
- section: callback_loganalytics
key: include_task_args
stream_name:
description: The name of the stream used to send the logs to the Azure Log Analytics workspace.
type: str
required: true
env:
- name: ANSIBLE_LOGANALYTICS_STREAM_NAME
ini:
- section: callback_loganalytics
key: stream_name
tenant_id:
description: Tenant ID for the Azure Active Directory.
type: str
required: true
env:
- name: ANSIBLE_LOGANALYTICS_TENANT_ID
ini:
- section: callback_loganalytics
key: tenant_id
timeout:
description: Timeout for the HTTP requests to the Azure Log Analytics API.
type: int
default: 2
env:
- name: ANSIBLE_LOGANALYTICS_TIMEOUT
ini:
- section: callback_loganalytics
key: timeout
seealso:
- name: Logs Ingestion API
description: Overview of Logs Ingestion API in Azure Monitor
link: https://learn.microsoft.com/en-us/azure/azure-monitor/logs/logs-ingestion-api-overview
notes:
- Triple verbosity logging (C(-vvv)) can be used to generate JSON sample data for creating the table schema in Azure Log Analytics.
Search for the string C(Event Data:) in the output in order to locate the data sample.
"""
EXAMPLES = """
examples: |
Enable the plugin in ansible.cfg:
[defaults]
callback_enabled = community.general.loganalytics_ingestion
Set the environment variables:
export ANSIBLE_LOGANALYTICS_DCE_URL=https://my-dce.ingest.monitor.azure.com
export ANSIBLE_LOGANALYTICS_DCR_ID=dcr-xxxxxx
export ANSIBLE_LOGANALYTICS_CLIENT_ID=xxxxxxxx
export ANSIBLE_LOGANALYTICS_CLIENT_SECRET=xxxxxxxx
export ANSIBLE_LOGANALYTICS_TENANT_ID=xxxxxxxx
export ANSIBLE_LOGANALYTICS_STREAM_NAME=Custom-MyTable
"""
import getpass
import json
import socket
import uuid
from datetime import datetime, timedelta, timezone
from os.path import basename
from urllib.parse import urlencode
from ansible.module_utils.urls import open_url
from ansible.plugins.callback import CallbackBase
from ansible.utils.display import Display
display = Display()
class AzureLogAnalyticsIngestionSource:
def __init__(
self,
dce_url,
dcr_id,
disable_attempts,
disable_on_failure,
client_id,
client_secret,
tenant_id,
stream_name,
include_task_args,
include_content,
timeout,
fqcn,
):
self.dce_url = dce_url
self.dcr_id = dcr_id
self.disabled = False
self.disable_attempts = disable_attempts
self.disable_on_failure = disable_on_failure
self.client_id = client_id
self.client_secret = client_secret
self.failures = 0
self.tenant_id = tenant_id
self.stream_name = stream_name
self.include_task_args = include_task_args
self.include_content = include_content
self.token_expiration_time = None
self.session = str(uuid.uuid4())
self.host = socket.gethostname()
self.user = getpass.getuser()
self.timeout = timeout
self.fqcn = fqcn
self.bearer_token = self.get_bearer_token()
# OAuth2 authentication method to get a Bearer token
# This replaces the shared_key authentication mechanism
def get_bearer_token(self):
url = f"https://login.microsoftonline.com/{self.tenant_id}/oauth2/v2.0/token"
headers = {"Content-Type": "application/x-www-form-urlencoded"}
data = urlencode(
{
"grant_type": "client_credentials",
"client_id": self.client_id,
"client_secret": self.client_secret,
# The scope value comes from https://learn.microsoft.com/en-us/azure/azure-monitor/logs/logs-ingestion-api-overview#headers
# and https://learn.microsoft.com/en-us/entra/identity-platform/scopes-oidc#the-default-scope
"scope": "https://monitor.azure.com/.default",
}
)
response = open_url(url, data=data, force=True, headers=headers, method="POST", timeout=self.timeout)
j = json.loads(response.read().decode("utf-8"))
self.token_expiration_time = datetime.now() + timedelta(seconds=j.get("expires_in"))
return j.get("access_token")
def is_token_valid(self):
return datetime.now() + timedelta(seconds=10) < self.token_expiration_time
# Method to send event data to the Azure Logs Ingestion API
# This replaces the legacy API call and now uses the Logs Ingestion API endpoint
def send_event(self, event_data):
if not self.is_token_valid():
self.bearer_token = self.get_bearer_token()
ingestion_url = (
f"{self.dce_url}/dataCollectionRules/{self.dcr_id}/streams/{self.stream_name}?api-version=2023-01-01"
)
headers = {"Authorization": f"Bearer {self.bearer_token}", "Content-Type": "application/json"}
open_url(ingestion_url, data=json.dumps(event_data), headers=headers, method="POST", timeout=self.timeout)
def _rfc1123date(self):
return datetime.now(timezone.utc).strftime("%a, %d %b %Y %H:%M:%S GMT")
# This method wraps the private method with the appropriate error handling.
def send_to_loganalytics(self, playbook_name, result, state):
if self.disabled:
return
try:
self._send_to_loganalytics(playbook_name, result, state)
except Exception as e:
display.warning(f"{self.fqcn} callback plugin failure: {e}.")
if self.disable_on_failure:
self.failures += 1
if self.failures >= self.disable_attempts:
display.warning(
f"{self.fqcn} callback plugin failures exceed maximum of '{self.disable_attempts}'! Disabling plugin!"
)
self.disabled = True
else:
display.v(f"{self.fqcn} callback plugin failure {self.failures}/{self.disable_attempts}")
def _send_to_loganalytics(self, playbook_name, result, state):
ansible_role = str(result._task._role) if result._task._role else None
# Include/Exclude task args
if not self.include_task_args:
result._task_fields.pop("args", None)
# Include/Exclude content
if not self.include_content:
result._result.pop("content", None)
# Build the event data
event_data = [
{
"TimeGenerated": self._rfc1123date(),
"Host": result._host.name,
"User": self.user,
"Playbook": playbook_name,
"Role": ansible_role,
"TaskName": result._task.get_name(),
"Task": result._task_fields,
"Action": result._task_fields["action"],
"State": state,
"Result": result._result,
"Session": self.session,
}
]
# The data displayed here can be used as a sample file in order to create the table's schema.
display.vvv(f"Event Data: {json.dumps(event_data)}")
self.send_event(event_data)
class CallbackModule(CallbackBase):
CALLBACK_VERSION = 2.0
CALLBACK_TYPE = "notification"
CALLBACK_NAME = "loganalytics_ingestion"
CALLBACK_NEEDS_ENABLED = True
def __init__(self, display=None):
super().__init__(display=display)
self.start_datetimes = {}
self.playbook_name = None
self.azure_loganalytics = None
self.fqcn = f"community.general.{self.CALLBACK_NAME}"
def set_options(self, task_keys=None, var_options=None, direct=None):
super().set_options(task_keys=task_keys, var_options=var_options, direct=direct)
# Set options for the new Azure Logs Ingestion API configuration
self.client_id = self.get_option("client_id")
self.client_secret = self.get_option("client_secret")
self.dce_url = self.get_option("dce_url")
self.dcr_id = self.get_option("dcr_id")
self.disable_attempts = self.get_option("disable_attempts")
self.disable_on_failure = self.get_option("disable_on_failure")
self.include_content = self.get_option("include_content")
self.include_task_args = self.get_option("include_task_args")
self.stream_name = self.get_option("stream_name")
self.tenant_id = self.get_option("tenant_id")
self.timeout = self.get_option("timeout")
# Initialize the AzureLogAnalyticsIngestionSource with the new settings
self.azure_loganalytics = AzureLogAnalyticsIngestionSource(
self.dce_url,
self.dcr_id,
self.disable_attempts,
self.disable_on_failure,
self.client_id,
self.client_secret,
self.tenant_id,
self.stream_name,
self.include_task_args,
self.include_content,
self.timeout,
self.fqcn,
)
def v2_playbook_on_start(self, playbook):
self.playbook_name = basename(playbook._file_name)
# Build event data and send it to the Logs Ingestion API
def v2_runner_on_failed(self, result, **kwargs):
self.azure_loganalytics.send_to_loganalytics(self.playbook_name, result, "FAILED")
def v2_runner_on_ok(self, result, **kwargs):
self.azure_loganalytics.send_to_loganalytics(self.playbook_name, result, "OK")


@@ -254,7 +254,7 @@ class CallbackModule(CallbackBase):
self._runtime(result),
)
def runner_on_async_failed(self, result, **kwargs):
def v2_runner_on_async_failed(self, result, **kwargs):
self.splunk.send_event(
self.url,
self.authtoken,


@@ -0,0 +1,30 @@
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# SPDX-FileCopyrightText: 2026 Christoph Fiehe <christoph.fiehe@gmail.com>
# Note that this doc fragment is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
class ModuleDocFragment:
# Use together with ansible.builtin.url and icinga2_argument_spec from
# ansible_collections.community.general.plugins.module_utils._icinga2
DOCUMENTATION = r"""
options:
url:
description:
- URL of the Icinga 2 REST API.
type: str
required: true
ca_path:
description:
- CA certificates bundle to use to verify the Icinga 2 server certificate.
type: path
timeout:
description:
- How long to wait for the server to send data before giving up.
type: int
default: 10
"""


@@ -16,6 +16,14 @@ options:
description: A string containing an INI document.
type: string
required: true
delimiters:
description: A list of characters used as delimiters in the INI document.
type: list
elements: string
default:
- "="
- ":"
version_added: 12.4.0
seealso:
- plugin: community.general.to_ini
plugin_type: filter
@@ -53,13 +61,17 @@ from configparser import ConfigParser
from io import StringIO
from ansible.errors import AnsibleFilterError
from ansible.module_utils.common.collections import is_sequence
class IniParser(ConfigParser):
"""Implements a configparser which is able to return a dict"""
def __init__(self):
super().__init__(interpolation=None)
def __init__(self, delimiters=None):
if delimiters is None:
super().__init__(interpolation=None)
else:
super().__init__(interpolation=None, delimiters=delimiters)
self.optionxform = str
def as_dict(self):
@@ -74,13 +86,21 @@ class IniParser(ConfigParser):
return d
def from_ini(obj):
def from_ini(obj, delimiters=None):
"""Read the given string as INI file and return a dict"""
if not isinstance(obj, str):
raise AnsibleFilterError(f"from_ini requires a str, got {type(obj)}")
if delimiters is not None:
if not is_sequence(delimiters):
raise AnsibleFilterError(f"from_ini's delimiters parameter must be a sequence, got {type(delimiters)}")
delimiters = tuple(delimiters)
if not all(isinstance(elt, str) for elt in delimiters):
raise AnsibleFilterError(
f"from_ini's delimiters parameter must be a sequence of strings, got {delimiters!r}"
)
parser = IniParser()
parser = IniParser(delimiters=delimiters)
try:
parser.read_file(StringIO(obj))
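The behaviour the new ``delimiters`` parameter exposes comes directly from Python's ``configparser``, which the filter wraps; a minimal standalone sketch (the sample INI data is invented for illustration):

```python
from configparser import ConfigParser
from io import StringIO

ini_doc = "[hosts]\n127.0.0.1:8080 = web\n"

# With configparser's default delimiters ('=', ':') the key is split at the
# first delimiter found, the colon, mangling the option name.
default = ConfigParser(interpolation=None)
default.read_file(StringIO(ini_doc))
print(dict(default["hosts"]))  # {'127.0.0.1': '8080 = web'}

# Restricting delimiters to '=' keeps the colon as part of the option name.
restricted = ConfigParser(interpolation=None, delimiters=("=",))
restricted.read_file(StringIO(ini_doc))
print(dict(restricted["hosts"]))  # {'127.0.0.1:8080': 'web'}
```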


@@ -0,0 +1,127 @@
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# SPDX-FileCopyrightText: 2026 Christoph Fiehe <christoph.fiehe@gmail.com>
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
import json
import typing as t
from ansible.module_utils.common.text.converters import to_bytes
from ansible.module_utils.urls import fetch_url, url_argument_spec
if t.TYPE_CHECKING:
from http.client import HTTPResponse
from urllib.error import HTTPError
from ansible.module_utils.basic import AnsibleModule
class Icinga2Client:
def __init__(
self,
module: AnsibleModule,
url: str,
ca_path: str | None = None,
timeout: int | float | None = None,
) -> None:
self.module = module
self.url = url.rstrip("/")
self.ca_path = ca_path
self.timeout = timeout
self.actions = Actions(client=self)
def send_request(
self, method: str, path: str, data: dict[str, t.Any] | None = None
) -> tuple[HTTPResponse | HTTPError, dict[str, t.Any]]:
url = f"{self.url}/{path}"
headers = {
"X-HTTP-Method-Override": method.upper(),
"Accept": "application/json",
}
return fetch_url(
module=self.module,
url=url,
ca_path=self.ca_path,
data=to_bytes(json.dumps(data)),
headers=headers,
timeout=self.timeout,
)
class Actions:
base_path = "v1/actions"
def __init__(self, client: Icinga2Client) -> None:
self.client = client
def schedule_downtime(
self,
object_type: str,
filter: str,
author: str,
comment: str,
start_time: int,
end_time: int,
duration: int,
filter_vars: dict[str, t.Any] | None = None,
fixed: bool | None = None,
all_services: bool | None = None,
trigger_name: str | None = None,
child_options: str | None = None,
) -> tuple[HTTPResponse | HTTPError, dict[str, t.Any]]:
path = f"{self.base_path}/schedule-downtime"
data: dict[str, t.Any] = {
"type": object_type,
"filter": filter,
"author": author,
"comment": comment,
"start_time": start_time,
"end_time": end_time,
"duration": duration,
}
if filter_vars is not None:
data["filter_vars"] = filter_vars
if fixed is not None:
data["fixed"] = fixed
if all_services is not None:
data["all_services"] = all_services
if trigger_name is not None:
data["trigger_name"] = trigger_name
if child_options is not None:
data["child_options"] = child_options
return self.client.send_request(method="POST", path=path, data=data)
def remove_downtime(
self,
object_type: str,
name: str | None = None,
filter: str | None = None,
filter_vars: dict[str, t.Any] | None = None,
) -> tuple[HTTPResponse | HTTPError, dict[str, t.Any]]:
path = f"{self.base_path}/remove-downtime"
data: dict[str, t.Any] = {"type": object_type}
if name is not None:
data[object_type.lower()] = name
if filter is not None:
data["filter"] = filter
if filter_vars is not None:
data["filter_vars"] = filter_vars
return self.client.send_request(method="POST", path=path, data=data)
def icinga2_argument_spec() -> dict[str, t.Any]:
argument_spec = url_argument_spec()
argument_spec.update(
url=dict(type="str", required=True),
ca_path=dict(type="path"),
timeout=dict(type="int", default=10),
)
return argument_spec
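The ``is not None`` checks in ``schedule_downtime`` matter because an explicit ``fixed=False`` must still reach the API; testing truthiness would silently drop it. A condensed sketch of the same pattern (``build_downtime_payload`` is a hypothetical helper, not part of the module utils):

```python
def build_downtime_payload(object_type, filter, author, comment,
                           start_time, end_time, duration, **optional):
    """Build a schedule-downtime request body. Optional parameters are
    included whenever they are not None, so an explicit False survives;
    an 'if fixed:' truthiness test would silently drop fixed=False."""
    data = {
        "type": object_type, "filter": filter, "author": author,
        "comment": comment, "start_time": start_time,
        "end_time": end_time, "duration": duration,
    }
    data.update({k: v for k, v in optional.items() if v is not None})
    return data

payload = build_downtime_payload(
    "Host", 'host.name=="web01"', "ansible", "maintenance",
    1700000000, 1700003600, 3600, fixed=False, trigger_name=None,
)
print("fixed" in payload)         # True: an explicit False is kept
print("trigger_name" in payload)  # False: None is dropped
```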


@@ -0,0 +1,32 @@
# Copyright (c) Ansible Project
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# Note that this module util is **PRIVATE** to the collection. It can have breaking changes at any time.
# Do not use this from other collections or standalone plugins/modules!
from __future__ import annotations
import typing as t
def merge_settings_without_absent_nulls(
existing_settings: dict[str, t.Any], desired_settings: dict[str, t.Any]
) -> dict[str, t.Any]:
"""
Merges existing and desired settings into a new dictionary while excluding null values in desired settings that are absent in the existing settings.
This ensures idempotency by treating absent keys in existing settings and null values in desired settings as equivalent, preventing unnecessary updates.
Args:
existing_settings (dict): Dictionary representing the current settings in Keycloak
desired_settings (dict): Dictionary representing the desired settings
Returns:
dict: A new dictionary containing all entries from existing_settings and desired_settings,
excluding null values in desired_settings whose corresponding keys are not present in existing_settings
"""
existing = existing_settings or {}
desired = desired_settings or {}
return {**existing, **{k: v for k, v in desired.items() if v is not None or k in existing}}
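A quick illustration of the merge semantics, restating the same dict expression with sample data (the realm settings shown are invented):

```python
def merge_settings_without_absent_nulls(existing_settings, desired_settings):
    # Same expression as the module util: desired keys override existing
    # ones, but a None in desired is ignored unless the key already exists,
    # so absent keys and explicit nulls compare as equivalent.
    existing = existing_settings or {}
    desired = desired_settings or {}
    return {**existing, **{k: v for k, v in desired.items() if v is not None or k in existing}}

existing = {"sslRequired": "external", "displayName": "Demo"}
desired = {"displayName": None, "loginTheme": None, "enabled": True}

# 'loginTheme': None is dropped (absent in existing), 'displayName': None is
# kept (key exists, so null is an intentional reset), 'enabled' is added.
print(merge_settings_without_absent_nulls(existing, desired))
# {'sslRequired': 'external', 'displayName': None, 'enabled': True}
```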


@@ -25,6 +25,9 @@ URL_REALMS = "{url}/admin/realms"
URL_REALM = "{url}/admin/realms/{realm}"
URL_REALM_KEYS_METADATA = "{url}/admin/realms/{realm}/keys"
URL_LOCALIZATIONS = "{url}/admin/realms/{realm}/localization/{locale}"
URL_LOCALIZATION = "{url}/admin/realms/{realm}/localization/{locale}/{key}"
URL_TOKEN = "{url}/realms/{realm}/protocol/openid-connect/token"
URL_CLIENT = "{url}/admin/realms/{realm}/clients/{id}"
URL_CLIENTS = "{url}/admin/realms/{realm}/clients"
@@ -386,7 +389,9 @@ class KeycloakAPI:
self.restheaders = connection_header
self.http_agent = self.module.params.get("http_agent")
def _request(self, url: str, method: str, data: str | bytes | None = None):
def _request(
self, url: str, method: str, data: str | bytes | None = None, *, extra_headers: dict[str, str] | None = None
):
"""Makes a request to Keycloak and returns the raw response.
If a 401 is returned, attempts to re-authenticate
using first the module's refresh_token (if provided)
@@ -397,17 +402,18 @@ class KeycloakAPI:
:param url: request path
:param method: request method (e.g., 'GET', 'POST', etc.)
:param data: (optional) data for request
:param extra_headers headers to be sent with request, defaults to self.restheaders
:return: raw API response
"""
def make_request_catching_401() -> object | HTTPError:
def make_request_catching_401(headers: dict[str, str]) -> object | HTTPError:
try:
return open_url(
url,
method=method,
data=data,
http_agent=self.http_agent,
headers=self.restheaders,
headers=headers,
timeout=self.connection_timeout,
validate_certs=self.validate_certs,
)
@@ -416,7 +422,12 @@ class KeycloakAPI:
raise e
return e
r = make_request_catching_401()
headers = self.restheaders
if extra_headers is not None:
headers = headers.copy()
headers.update(extra_headers)
r = make_request_catching_401(headers)
if isinstance(r, Exception):
# Try to refresh token and retry, if available
@@ -426,7 +437,7 @@ class KeycloakAPI:
token = _request_token_using_refresh_token(self.module.params)
self.restheaders["Authorization"] = f"Bearer {token}"
r = make_request_catching_401()
r = make_request_catching_401(headers)
except KeycloakError as e:
# Token refresh returns 400 if token is expired/invalid, so continue on if we get a 400
if e.authError is not None and e.authError.code != 400: # type: ignore # TODO!
@@ -440,7 +451,7 @@ class KeycloakAPI:
token = _request_token_using_credentials(self.module.params)
self.restheaders["Authorization"] = f"Bearer {token}"
r = make_request_catching_401()
r = make_request_catching_401(headers)
if isinstance(r, Exception):
# Try to re-auth with client_id and client_secret, if available
@@ -451,7 +462,7 @@ class KeycloakAPI:
token = _request_token_using_client_credentials(self.module.params)
self.restheaders["Authorization"] = f"Bearer {token}"
r = make_request_catching_401()
r = make_request_catching_401(headers)
except KeycloakError as e:
# Token refresh returns 400 if token is expired/invalid, so continue on if we get a 400
if e.authError is not None and e.authError.code != 400: # type: ignore # TODO!
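The copy-on-write header handling introduced above can be sketched in isolation; the header values are hypothetical:

```python
def merged_headers(base_headers, extra_headers=None):
    # Only copy the shared default headers when a caller actually passes
    # per-request extras; extras win over the defaults on key conflicts.
    headers = base_headers
    if extra_headers is not None:
        headers = headers.copy()
        headers.update(extra_headers)
    return headers

base = {"Authorization": "Bearer abc", "Content-Type": "application/json"}
merged = merged_headers(base, {"Content-Type": "text/plain; charset=utf-8"})
# base stays untouched; merged carries the text/plain override
```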
@@ -590,6 +601,78 @@ class KeycloakAPI:
except Exception as e:
self.fail_request(e, msg=f"Could not delete realm {realm}: {e}", exception=traceback.format_exc())
def get_localization_values(self, locale: str, realm: str = "master") -> dict[str, str]:
"""
Get all localization overrides for a given realm and locale.
:param locale: Locale code (for example, 'en', 'fi', 'de').
:param realm: Realm name. Defaults to 'master'.
:return: Mapping of localization keys to override values.
:raise KeycloakError: Wrapped HTTP/JSON error with context
"""
realm_url = URL_LOCALIZATIONS.format(url=self.baseurl, realm=realm, locale=locale)
try:
return self._request_and_deserialize(realm_url, method="GET")
except Exception as e:
self.fail_request(
e,
msg=f"Could not read localization overrides for realm {realm}, locale {locale}: {e}",
exception=traceback.format_exc(),
)
def set_localization_value(self, locale: str, key: str, value: str, realm: str = "master"):
"""
Create or update a single localization override for the given key.
:param locale: Locale code (for example, 'en').
:param key: Localization message key to set.
:param value: Override value to set.
:param realm: Realm name. Defaults to 'master'.
:return: HTTPResponse: Response object on success.
:raise KeycloakError: Wrapped HTTP error with context
"""
realm_url = URL_LOCALIZATION.format(url=self.baseurl, realm=realm, locale=locale, key=key)
headers = {}
headers["Content-Type"] = "text/plain; charset=utf-8"
try:
return self._request(realm_url, method="PUT", data=to_native(value), extra_headers=headers)
except Exception as e:
self.fail_request(
e,
msg=f"Could not set localization value in realm {realm}, locale {locale}: {key}={value}: {e}",
exception=traceback.format_exc(),
)
def delete_localization_value(self, locale: str, key: str, realm: str = "master"):
"""
Delete a single localization override key for the given locale.
:param locale: Locale code (for example, 'en').
:param key: Localization message key to delete.
:param realm: Realm name. Defaults to 'master'.
:return: HTTPResponse: Response object on success.
:raise KeycloakError: Wrapped HTTP error with context
"""
realm_url = URL_LOCALIZATION.format(url=self.baseurl, realm=realm, locale=locale, key=key)
try:
return self._request(realm_url, method="DELETE")
except Exception as e:
self.fail_request(
e,
msg=f"Could not delete localization value in realm {realm}, locale {locale}, key {key}: {e}",
exception=traceback.format_exc(),
)
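The two URL templates these methods use (added near the top of the file) expand as follows; the server URL is hypothetical:

```python
URL_LOCALIZATIONS = "{url}/admin/realms/{realm}/localization/{locale}"
URL_LOCALIZATION = "{url}/admin/realms/{realm}/localization/{locale}/{key}"

base = "https://keycloak.example.com"
list_url = URL_LOCALIZATIONS.format(url=base, realm="master", locale="de")
item_url = URL_LOCALIZATION.format(url=base, realm="master", locale="de", key="loginTitle")
# GET list_url returns all overrides for the locale; PUT/DELETE item_url
# sets or removes a single key (PUT sends text/plain; charset=utf-8).
```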
def get_clients(self, realm: str = "master", filter=None):
"""Obtains client representations for clients in a realm
@@ -998,7 +1081,7 @@ class KeycloakAPI:
:param realm: Realm in which the user resides; default 'master'
"""
users_url = URL_USERS.format(url=self.baseurl, realm=realm)
users_url += f"?username={username}&exact=true"
users_url += f"?username={quote(username, safe='')}&exact=true"
try:
userrep = None
users = self._request_and_deserialize(users_url, method="GET")
@@ -1637,9 +1720,8 @@ class KeycloakAPI:
def get_group_by_name(self, name, realm: str = "master", parents=None):
"""Fetch a keycloak group within a realm based on its name.
The Keycloak API does not allow filtering of the Groups resource by name.
As a result, this method first retrieves the entire list of groups - name and ID -
then performs a second query to fetch the group.
Uses the Keycloak search API with exact matching for efficient lookup
instead of fetching all groups.
If the group does not exist, None is returned.
:param name: Name of the group to fetch.
@@ -1653,11 +1735,21 @@ class KeycloakAPI:
if not parent:
return None
all_groups = self.get_subgroups(parent, realm)
# For subgroups: use children endpoint with search parameter
search_url = "{url}?search={name}&exact=true".format(
url=URL_GROUP_CHILDREN.format(url=self.baseurl, realm=realm, groupid=parent["id"]),
name=quote(name, safe=""),
)
else:
all_groups = self.get_groups(realm=realm)
# For top-level groups: use groups endpoint with search parameter
search_url = "{url}?search={name}&exact=true".format(
url=URL_GROUPS.format(url=self.baseurl, realm=realm), name=quote(name, safe="")
)
for group in all_groups:
groups = self._request_and_deserialize(search_url, method="GET")
# exact=true should return only exact matches, but verify the name
for group in groups:
if group["name"] == name:
return self.get_group_by_groupid(group["id"], realm=realm)
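A standalone sketch of the new search-based lookup URL; the URL_GROUPS constant shape and the group name are illustrative:

```python
from urllib.parse import quote

URL_GROUPS = "{url}/admin/realms/{realm}/groups"  # assumed shape of the module's constant
name = "dev ops"
search_url = "{url}?search={name}&exact=true".format(
    url=URL_GROUPS.format(url="https://keycloak.example.com", realm="master"),
    name=quote(name, safe=""),
)
# the space in the group name is percent-encoded so the query stays valid
```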
@@ -3018,7 +3110,7 @@ class KeycloakAPI:
def get_authz_permission_by_name(self, name, client_id, realm):
"""Get authorization permission by name"""
url = URL_AUTHZ_POLICIES.format(url=self.baseurl, client_id=client_id, realm=realm)
search_url = f"{url}/search?name={name.replace(' ', '%20')}"
search_url = f"{url}/search?name={quote(name, safe='')}"
try:
return self._request_and_deserialize(search_url, method="GET")
@@ -3064,7 +3156,7 @@ class KeycloakAPI:
def get_authz_resource_by_name(self, name, client_id, realm):
"""Get authorization resource by name"""
url = URL_AUTHZ_RESOURCES.format(url=self.baseurl, client_id=client_id, realm=realm)
search_url = f"{url}/search?name={name.replace(' ', '%20')}"
search_url = f"{url}/search?name={quote(name, safe='')}"
try:
return self._request_and_deserialize(search_url, method="GET")
@@ -3074,7 +3166,7 @@ class KeycloakAPI:
def get_authz_policy_by_name(self, name, client_id, realm):
"""Get authorization policy by name"""
url = URL_AUTHZ_POLICIES.format(url=self.baseurl, client_id=client_id, realm=realm)
search_url = f"{url}/search?name={name.replace(' ', '%20')}"
search_url = f"{url}/search?name={quote(name, safe='')}"
try:
return self._request_and_deserialize(search_url, method="GET")
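The motivation for swapping `name.replace(' ', '%20')` for `quote`: the old form only escaped spaces, so any other reserved character would corrupt the query string:

```python
from urllib.parse import quote

name = "read & write documents"
old = name.replace(" ", "%20")  # '&' passes through unescaped and splits the query
new = quote(name, safe="")      # every reserved character is percent-encoded
```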


@@ -6,9 +6,15 @@
from __future__ import annotations
import traceback
from contextlib import contextmanager
from functools import wraps
from ansible_collections.community.general.plugins.module_utils.mh.exceptions import ModuleHelperException
from ansible_collections.community.general.plugins.module_utils.mh.exceptions import (
ModuleHelperException,
_UnhandledSentinel,
)
_unhandled_exceptions: tuple[type[Exception], ...] = (_UnhandledSentinel,)
def cause_changes(when=None):
@@ -32,6 +38,17 @@ def cause_changes(when=None):
return deco
@contextmanager
def no_handle_exceptions(*exceptions: type[Exception]):
global _unhandled_exceptions
current = _unhandled_exceptions
_unhandled_exceptions = tuple(exceptions)
try:
yield
finally:
_unhandled_exceptions = current
def module_fails_on_exception(func):
conflict_list = ("msg", "exception", "output", "vars", "changed")
@@ -46,6 +63,9 @@ def module_fails_on_exception(func):
try:
func(self, *args, **kwargs)
except _unhandled_exceptions:
# re-raise exception without further processing
raise
except ModuleHelperException as e:
if e.update_output:
self.update_output(e.update_output)
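The context-manager pattern above is runnable on its own; the exception types here are placeholders:

```python
from contextlib import contextmanager

_unhandled_exceptions = (ValueError,)  # stand-in for the module-level default

@contextmanager
def no_handle_exceptions(*exceptions):
    # Swap the tuple consulted by module_fails_on_exception for the duration
    # of the block, restoring the previous value even if the block raises.
    global _unhandled_exceptions
    current = _unhandled_exceptions
    _unhandled_exceptions = tuple(exceptions)
    try:
        yield
    finally:
        _unhandled_exceptions = current

with no_handle_exceptions(KeyError, IndexError):
    inside = _unhandled_exceptions
after = _unhandled_exceptions  # restored after the block exits
```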


@@ -15,3 +15,7 @@ class ModuleHelperException(Exception):
update_output = {}
self.update_output: dict[str, t.Any] = update_output
super().__init__(*args)
class _UnhandledSentinel(Exception):
pass


@@ -661,17 +661,37 @@ class RedfishUtils:
:return: dict containing the status of the service
"""
result = {}
service_root_data = {}
# Get these entries, but do not fail if they are not found
properties = [
"Id",
"Name",
"RedfishVersion",
"Vendor",
"ServiceIdentification",
"ProtocolFeaturesSupported",
"UUID",
]
# Get the service root
# Override the timeout since the service root is expected to be readily
# available.
# Override the timeout since the service root is expected to be readily available.
service_root = self.get_request(self.root_uri + self.service_root, timeout=10)
if service_root["ret"] is False:
# Failed, either due to a timeout or HTTP error; not available
return {"ret": True, "available": False}
# Successfully accessed the service root; available
return {"ret": True, "available": True}
result["ret"] = True
result["available"] = True
data = service_root["data"]
for property in properties:
if property in data:
service_root_data[property] = data[property]
result["entries"] = service_root_data
return result
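The property filtering reduces to a dict comprehension over an allow-list; the payload below is a hypothetical service root response:

```python
properties = ["Id", "Name", "RedfishVersion", "Vendor",
              "ServiceIdentification", "ProtocolFeaturesSupported", "UUID"]
data = {  # hypothetical GET /redfish/v1 payload
    "Id": "RootService",
    "Name": "Root Service",
    "RedfishVersion": "1.15.0",
    "Links": {"Sessions": {"@odata.id": "/redfish/v1/SessionService/Sessions"}},
}
entries = {p: data[p] for p in properties if p in data}
# "Links" is not in the allow-list and is dropped from the result
```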
def get_logs(self):
log_svcs_uri_list = []


@@ -50,12 +50,14 @@ options:
is run.
type: str
includepkgs:
description: List of packages to include.
description:
- List of packages to include in all operations. Inverse of O(excludepkgs), DNF will exclude any package in
the repository that does not match this list. Matches a name or a glob.
type: list
elements: str
version_added: 9.4.0
excludepkgs:
description: List of packages to exclude.
description: List of packages in this repository to exclude from all operations. Matches a name or a glob.
type: list
elements: str
version_added: 9.4.0
@@ -74,12 +76,14 @@ EXAMPLES = r"""
state: absent
name: '@copr/integration_tests'
- name: Install Caddy
- name: Install a repo where only packages starting with "python" that do not have i386 are seen by DNF
community.general.copr:
name: '@caddy/caddy'
name: '@sample/repo'
chroot: fedora-rawhide-{{ ansible_facts.architecture }}
includepkgs:
- caddy
- 'python*'
excludepkgs:
- '*.i386'
"""
RETURN = r"""


@@ -0,0 +1,309 @@
#!/usr/bin/python
#
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
# SPDX-FileCopyrightText: 2026 Christoph Fiehe <christoph.fiehe@gmail.com>
from __future__ import annotations
DOCUMENTATION = r"""
module: icinga2_downtime
short_description: Manages Icinga 2 downtimes
version_added: "12.4.0"
description:
- Manages downtimes in Icinga 2 through its REST API.
- Options as described at U(https://icinga.com/docs/icinga-2/latest/doc/12-icinga2-api/#schedule-downtime).
author:
- Christoph Fiehe (@cfiehe)
attributes:
check_mode:
support: none
details:
- With a complex filter expression, it can be very difficult to predict
whether downtime creation or removal would succeed and result in a change.
diff_mode:
support: none
options:
all_services:
description:
- Whether downtimes should be created for all services of the matched host objects.
- If omitted, Icinga 2 does not create downtimes for all services of the matched host objects by default.
type: bool
author:
description:
- Name of the author.
type: str
default: "Ansible"
comment:
description:
- A descriptive comment.
type: str
default: Downtime scheduled by Ansible
child_options:
description:
- Schedule child downtimes.
type: str
choices: ["DowntimeNoChildren", "DowntimeTriggeredChildren", "DowntimeNonTriggeredChildren"]
duration:
description:
- Duration of the downtime.
- Required in case of a flexible downtime.
type: int
end_time:
description:
- End time of the downtime as UNIX timestamp.
type: int
filter_vars:
description:
- Variable names and values used in the filter expression.
type: dict
filter:
description:
- Filter expression limiting the objects to operate on.
type: str
fixed:
description:
- Whether the downtime is fixed or flexible.
- If omitted, Icinga 2 creates a fixed downtime by default.
type: bool
name:
description:
- Name of the downtime object.
- This option has no effect for states other than V(absent).
type: str
object_type:
description:
- Use V(Host) for a host downtime and V(Service) for a service downtime.
- Use V(Downtime) and give the name of the downtime object you want to remove.
type: str
choices: ["Service", "Host", "Downtime"]
default: Host
start_time:
description:
- Start time of the downtime as UNIX timestamp.
type: int
state:
description:
- State of the downtime.
type: str
choices: ["present", "absent"]
default: present
trigger_name:
description:
- Name of the downtime trigger.
type: str
extends_documentation_fragment:
- community.general._icinga2_api
- community.general.attributes
- ansible.builtin.url
"""
EXAMPLES = r"""
- name: Schedule a host downtime
community.general.icinga2_downtime:
url: "https://icinga2.example.com:5665"
url_username: icingadmin
url_password: secret
state: present
author: Ansible
comment: Scheduled downtime for test purposes.
all_services: true
start_time: "{{ downtime_start_time }}"
end_time: "{{ downtime_end_time }}"
duration: "{{ downtime_duration }}"
fixed: true
object_type: Host
filter: host.name=="host.example.com"
delegate_to: localhost
register: icinga2_downtime_response
vars:
downtime_start_time: "{{ ansible_date_time['epoch'] | int }}"
downtime_end_time: "{{ downtime_start_time | int + 3600 }}"
downtime_duration: "{{ downtime_end_time | int - downtime_start_time | int }}"
- name: Remove scheduled host downtime
community.general.icinga2_downtime:
url: "https://icinga2.example.com:5665"
url_username: icingadmin
url_password: secret
state: absent
author: Ansible
object_type: Downtime
name: "{{ icinga2_downtime_response.results[0].name }}"
delegate_to: localhost
when: icinga2_downtime_response.results | default([]) | length > 0
"""
RETURN = r"""
# Returns the results of downtime scheduling as a list of JSON dictionaries from the Icinga 2 API under the C(results) key.
# Refer to https://icinga.com/docs/icinga-2/latest/doc/12-icinga2-api/#schedule-downtime for more details.
results:
description: Results of downtime scheduling or removal.
type: list
returned: success
elements: dict
contains:
code:
description: Success or error code of downtime scheduling.
returned: always
type: int
sample: 200
legacy_id:
description: Legacy id of the downtime object.
returned: if a downtime was scheduled successfully
type: int
sample: 28911
name:
description: Name of the downtime object.
returned: if a downtime was scheduled successfully
type: str
sample: host.example.com!e19c705a-54c2-49c5-8014-70ff624f9e51
status:
description: Human-readable message describing the result of downtime scheduling.
returned: always
type: str
sample: Successfully scheduled downtime 'host.example.com!e19c705a-54c2-49c5-8014-70ff624f9e51' for object 'host.example.com'.
sample:
[
{
"code": 200,
"legacy_id": 28911,
"name": "host.example.com!e19c705a-54c2-49c5-8014-70ff624f9e51",
"status": "Successfully scheduled downtime 'host.example.com!e19c705a-54c2-49c5-8014-70ff624f9e51' for object 'host.example.com'.",
}
]
error:
description: Error message as JSON dictionary returned from the Icinga 2 API.
type: dict
returned: if downtime scheduling or removal did not succeed
sample:
{
"error": 404,
"status": "No objects found."
}
"""
import json
from contextlib import suppress
from ansible_collections.community.general.plugins.module_utils._icinga2 import (
Icinga2Client,
icinga2_argument_spec,
)
from ansible_collections.community.general.plugins.module_utils.module_helper import StateModuleHelper
class Icinga2Downtime(StateModuleHelper):
argument_spec = icinga2_argument_spec()
argument_spec.update(
all_services=dict(type="bool"),
author=dict(type="str", default="Ansible"),
comment=dict(type="str", default="Downtime scheduled by Ansible"),
child_options=dict(
type="str",
choices=[
"DowntimeNoChildren",
"DowntimeTriggeredChildren",
"DowntimeNonTriggeredChildren",
],
),
duration=dict(type="int"),
end_time=dict(type="int"),
filter_vars=dict(type="dict"),
filter=dict(type="str"),
fixed=dict(type="bool"),
name=dict(type="str"),
object_type=dict(type="str", choices=["Service", "Host", "Downtime"], default="Host"),
start_time=dict(type="int"),
state=dict(type="str", choices=["present", "absent"], default="present"),
trigger_name=dict(type="str"),
)
module = dict(
argument_spec=argument_spec,
supports_check_mode=False,
required_if=(
(
"state",
"present",
["comment", "start_time", "end_time", "filter"],
),
("fixed", False, ["duration"]),
),
required_one_of=[["filter", "name"]],
)
def __init_module__(self) -> None:
self.client = Icinga2Client(
module=self.module, # type:ignore[arg-type]
url=self.vars.url,
ca_path=self.vars.ca_path,
timeout=self.vars.timeout,
)
def state_present(self) -> None:
duration = self.vars.duration
end_time = self.vars.end_time
start_time = self.vars.start_time
if end_time <= start_time:
self.do_raise(msg="The end time must be later than the start time.")
if duration is None:
duration = end_time - start_time
response, info = self.client.actions.schedule_downtime(
all_services=self.vars.all_services,
author=self.vars.author,
child_options=self.vars.child_options,
comment=self.vars.comment,
duration=duration,
end_time=end_time,
filter_vars=self.vars.filter_vars,
filter=self.vars.filter,
fixed=self.vars.fixed,
object_type=self.vars.object_type,
start_time=start_time,
trigger_name=self.vars.trigger_name,
)
status_code = info["status"]
if 200 <= status_code <= 299:
self.vars.set("results", json.loads(response.read())["results"], output=True)
self.vars.msg = "Successfully scheduled downtime."
self.changed = True
elif status_code >= 400:
with suppress(KeyError, ValueError):
self.vars.set("error", json.loads(info["body"])) # type:ignore[arg-type]
self.do_raise(msg="Unable to schedule downtime.")
def state_absent(self) -> None:
response, info = self.client.actions.remove_downtime(
filter_vars=self.vars.filter_vars,
filter=self.vars.filter,
name=self.vars.name,
object_type=self.vars.object_type,
)
status_code = info["status"]
if 200 <= status_code <= 299:
self.vars.set("results", json.loads(response.read())["results"], output=True)
self.vars.msg = "Successfully removed downtime."
self.changed = True
elif status_code == 404:
self.vars.msg = "No matching downtime object found."
elif status_code >= 400:
with suppress(KeyError, ValueError):
self.vars.set("error", json.loads(info["body"])) # type:ignore[arg-type]
self.do_raise(msg="Unable to remove downtime.")
def main():
Icinga2Downtime.execute()
if __name__ == "__main__":
main()


@@ -11,6 +11,12 @@ module: jboss
short_description: Deploy applications to JBoss
description:
- Deploy applications to JBoss standalone using the filesystem.
deprecated:
removed_in: 14.0.0
why: The module has not been very actively maintained and there is a better alternative.
alternative: >-
Use the C(middleware_automation.wildfly.wildfly_app_deploy) role to deploy applications in JBoss or WildFly.
See U(https://galaxy.ansible.com/ui/repo/published/middleware_automation/wildfly/content/role/wildfly_app_deploy/) for details.
extends_documentation_fragment:
- community.general.attributes
attributes:


@@ -151,6 +151,17 @@ options:
type: list
elements: str
valid_post_logout_redirect_uris:
description:
- Valid post logout redirect URIs for this client.
- This is stored as C(post.logout.redirect.uris) in the client attributes.
- Use V(+) as a single list element to allow all redirect URIs.
aliases:
- postLogoutRedirectUris
type: list
elements: str
version_added: "12.4.0"
not_before:
description:
- Revoke any tokens issued before this date for this client (this is a UNIX timestamp). This is C(notBefore) in the
@@ -227,6 +238,15 @@ options:
- frontchannelLogout
type: bool
backchannel_logout_url:
description:
- URL that will cause the client to log itself out when a logout request is sent to this realm.
- This is stored as C(backchannel.logout.url) in the client attributes.
aliases:
- backchannelLogoutUrl
type: str
version_added: "12.4.0"
protocol:
description:
- Type of client.
@@ -748,6 +768,9 @@ import copy
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.identity.keycloak._keycloak_utils import (
merge_settings_without_absent_nulls,
)
from ansible_collections.community.general.plugins.module_utils.identity.keycloak.keycloak import (
KeycloakAPI,
KeycloakError,
@@ -761,6 +784,21 @@ PROTOCOL_SAML = "saml"
PROTOCOL_DOCKER_V2 = "docker-v2"
CLIENT_META_DATA = ["authorizationServicesEnabled"]
# Parameters that map to client attributes rather than top-level API fields.
# Each entry maps the module parameter name to (attribute_key, transform_fn).
# transform_fn converts the module param value to the attribute string value.
# Use None for transform_fn when no transformation is needed (identity).
ATTRIBUTE_PARAMS = {
"valid_post_logout_redirect_uris": (
"post.logout.redirect.uris",
"##".join,
),
"backchannel_logout_url": (
"backchannel.logout.url",
None,
),
}
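How the mapping is consumed, mirroring the collection loop later in main(); the parameter values here are hypothetical:

```python
ATTRIBUTE_PARAMS = {
    "valid_post_logout_redirect_uris": ("post.logout.redirect.uris", "##".join),
    "backchannel_logout_url": ("backchannel.logout.url", None),
}

module_params = {  # stand-in for module.params
    "valid_post_logout_redirect_uris": ["https://app.example.com/*", "+"],
    "backchannel_logout_url": None,  # not set by the user, so skipped
}
attribute_overrides = {}
for param_name, (attr_key, transform_fn) in ATTRIBUTE_PARAMS.items():
    param_value = module_params.get(param_name)
    if param_value is not None:
        attribute_overrides[attr_key] = transform_fn(param_value) if transform_fn else param_value
# Keycloak stores the URI list as a single "##"-joined attribute string
```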
def normalise_scopes_for_behavior(desired_client, before_client, clientScopesBehavior):
"""
@@ -1219,6 +1257,7 @@ def main():
default_roles=dict(type="list", elements="str", aliases=["defaultRoles"]),
redirect_uris=dict(type="list", elements="str", aliases=["redirectUris"]),
web_origins=dict(type="list", elements="str", aliases=["webOrigins"]),
valid_post_logout_redirect_uris=dict(type="list", elements="str", aliases=["postLogoutRedirectUris"]),
not_before=dict(type="int", aliases=["notBefore"]),
bearer_only=dict(type="bool", aliases=["bearerOnly"]),
consent_required=dict(type="bool", aliases=["consentRequired"]),
@@ -1229,6 +1268,7 @@ def main():
authorization_services_enabled=dict(type="bool", aliases=["authorizationServicesEnabled"]),
public_client=dict(type="bool", aliases=["publicClient"]),
frontchannel_logout=dict(type="bool", aliases=["frontchannelLogout"]),
backchannel_logout_url=dict(type="str", aliases=["backchannelLogoutUrl"]),
protocol=dict(type="str", choices=[PROTOCOL_OPENID_CONNECT, PROTOCOL_SAML, PROTOCOL_DOCKER_V2]),
attributes=dict(type="dict"),
full_scope_allowed=dict(type="bool", aliases=["fullScopeAllowed"]),
@@ -1308,28 +1348,48 @@ def main():
# Build a proposed changeset from parameters given to this module
changeset = {}
# Collect attribute-mapped parameters to inject into attributes later
attribute_overrides = {}
for param_name, (attr_key, transform_fn) in ATTRIBUTE_PARAMS.items():
param_value = module.params.get(param_name)
if param_value is not None:
attribute_overrides[attr_key] = transform_fn(param_value) if transform_fn else param_value
for client_param in client_params:
new_param_value = module.params.get(client_param)
# Skip attribute-mapped params; they are handled via attributes
if client_param in ATTRIBUTE_PARAMS:
continue
# Unfortunately, the ansible argument spec checker introduces variables with null values when
# they are not specified
if client_param == "protocol_mappers":
new_param_value = [{k: v for k, v in x.items() if v is not None} for x in new_param_value]
elif client_param == "authentication_flow_binding_overrides":
new_param_value = flow_binding_from_dict_to_model(new_param_value, realm, kc)
elif client_param == "attributes" and "attributes" in before_client:
attributes_copy = copy.deepcopy(before_client["attributes"])
# Merge client attributes while excluding null-valued attributes that are not present in Keycloak's response.
# This ensures idempotency by treating absent attributes and null attributes as equivalent.
attributes_copy.update(
{key: value for key, value in new_param_value.items() if value is not None or key in attributes_copy}
desired_flow_binding_overrides = flow_binding_from_dict_to_model(new_param_value, realm, kc)
existing_flow_binding_overrides = before_client.get("authenticationFlowBindingOverrides")
# ensures idempotency
new_param_value = merge_settings_without_absent_nulls(
existing_flow_binding_overrides, desired_flow_binding_overrides
)
new_param_value = attributes_copy
elif client_param == "attributes" and "attributes" in before_client:
desired_attributes = new_param_value
existing_attributes = copy.deepcopy(before_client["attributes"])
# ensures idempotency
new_param_value = merge_settings_without_absent_nulls(existing_attributes, desired_attributes)
elif client_param in ["clientScopesBehavior", "client_scopes_behavior"]:
continue
changeset[camel(client_param)] = new_param_value
# Inject attribute-mapped parameters into the attributes dict
if attribute_overrides:
if "attributes" not in changeset:
changeset["attributes"] = copy.deepcopy(before_client.get("attributes", {}))
if isinstance(changeset["attributes"], dict):
changeset["attributes"].update(attribute_overrides)
# Prepare the desired values using the existing values (non-existence results in a dict that is safe to use as a basis)
desired_client = copy.deepcopy(before_client)
desired_client.update(changeset)
@@ -1393,7 +1453,7 @@ def main():
if module.check_mode:
result["end_state"] = sanitize_cr(desired_client_with_scopes)
if module._diff:
result["diff"] = dict(before=sanitize_cr(before_client), after=sanitize_cr(desired_client))
result["diff"] = dict(before=sanitize_cr(before_norm), after=sanitize_cr(desired_norm))
module.exit_json(**result)
# do the update


@@ -368,6 +368,35 @@ EXAMPLES = r"""
attribute.friendly.name: User Roles
attribute.name: roles
syncMode: INHERIT
- name: Create OIDC identity provider, authentication with credentials and advanced claim to group
community.general.keycloak_identity_provider:
state: present
auth_keycloak_url: https://auth.example.com/auth
auth_realm: master
auth_username: admin
auth_password: admin
realm: myrealm
alias: oidc-idp
display_name: OpenID Connect IdP
enabled: true
provider_id: oidc
config:
issuer: https://idp.example.com
authorizationUrl: https://idp.example.com/auth
tokenUrl: https://idp.example.com/token
userInfoUrl: https://idp.example.com/userinfo
clientAuthMethod: client_secret_post
clientId: my-client
clientSecret: secret
syncMode: FORCE
mappers:
- name: group_name
identityProviderMapper: oidc-advanced-group-idp-mapper
config:
claims: '[{"key":"my_key","value":"my_value"}]'
group: group_name
syncMode: INHERIT
"""
RETURN = r"""


@@ -291,6 +291,13 @@ options:
aliases:
- internationalizationEnabled
type: bool
localization_texts:
description:
- The custom localization texts for a realm.
aliases:
- localizationTexts
type: dict
version_added: 12.4.0
login_theme:
description:
- The realm login theme.
@@ -917,6 +924,7 @@ def main():
events_listeners=dict(type="list", elements="str", aliases=["eventsListeners"]),
failure_factor=dict(type="int", aliases=["failureFactor"]),
internationalization_enabled=dict(type="bool", aliases=["internationalizationEnabled"]),
localization_texts=dict(type="dict", aliases=["localizationTexts"]),
login_theme=dict(type="str", aliases=["loginTheme"]),
login_with_email_allowed=dict(type="bool", aliases=["loginWithEmailAllowed"]),
max_delta_time_seconds=dict(type="int", aliases=["maxDeltaTimeSeconds"]),


@@ -64,7 +64,23 @@ options:
description:
- The name of the "provider ID" for the key.
- The value V(rsa-enc) has been added in community.general 8.2.0.
choices: ['rsa', 'rsa-enc']
- The value V(java-keystore) has been added in community.general 12.4.0. This provider imports keys from
a Java Keystore (JKS or PKCS12) file located on the Keycloak server filesystem.
- The values V(rsa-generated), V(hmac-generated), V(aes-generated), and V(ecdsa-generated) have been added in
community.general 12.4.0. These are auto-generated key providers where Keycloak manages the key material.
- The values V(rsa-enc-generated), V(ecdh-generated), and V(eddsa-generated) have been added in
community.general 12.4.0. These correspond to the auto-generated key providers available in Keycloak 26.
choices:
- rsa
- rsa-enc
- java-keystore
- rsa-generated
- rsa-enc-generated
- hmac-generated
- aes-generated
- ecdsa-generated
- ecdh-generated
- eddsa-generated
default: 'rsa'
type: str
config:
@@ -94,15 +110,48 @@ options:
- Key algorithm.
- The values V(RS384), V(RS512), V(PS256), V(PS384), V(PS512), V(RSA1_5), V(RSA-OAEP), V(RSA-OAEP-256) have been
added in community.general 8.2.0.
- The values V(HS256), V(HS384), V(HS512) (for HMAC), V(ES256), V(ES384), V(ES512) (for ECDSA), and V(AES)
have been added in community.general 12.4.0.
- The values V(ECDH_ES), V(ECDH_ES_A128KW), V(ECDH_ES_A192KW), V(ECDH_ES_A256KW) (for ECDH key exchange),
and V(Ed25519), V(Ed448) (for EdDSA signing) have been added in community.general 12.4.0.
- For O(provider_id=rsa), O(provider_id=rsa-generated), and O(provider_id=java-keystore), defaults to V(RS256).
- For O(provider_id=rsa-enc) and O(provider_id=rsa-enc-generated), must be one of V(RSA1_5), V(RSA-OAEP), V(RSA-OAEP-256) (required, no default).
- For O(provider_id=hmac-generated), must be one of V(HS256), V(HS384), V(HS512) (required, no default).
- For O(provider_id=ecdsa-generated), must be one of V(ES256), V(ES384), V(ES512) (required, no default).
- For O(provider_id=ecdh-generated), must be one of V(ECDH_ES), V(ECDH_ES_A128KW), V(ECDH_ES_A192KW), V(ECDH_ES_A256KW) (required, no default).
- For O(provider_id=eddsa-generated), this option is not used (the algorithm is determined by O(config.elliptic_curve)).
- For O(provider_id=aes-generated), this option is not used (AES is always used).
choices:
- RS256
- RS384
- RS512
- PS256
- PS384
- PS512
- RSA1_5
- RSA-OAEP
- RSA-OAEP-256
- HS256
- HS384
- HS512
- ES256
- ES384
- ES512
- AES
- ECDH_ES
- ECDH_ES_A128KW
- ECDH_ES_A192KW
- ECDH_ES_A256KW
- Ed25519
- Ed448
default: RS256
choices: ['RS256', 'RS384', 'RS512', 'PS256', 'PS384', 'PS512', 'RSA1_5', 'RSA-OAEP', 'RSA-OAEP-256']
type: str
private_key:
description:
- The private key as an ASCII string. Contents of the key must match O(config.algorithm) and O(provider_id).
- Please note that the module cannot detect whether the private key specified differs from the current state's private
key. Use O(force=true) to force the module to update the private key if you expect it to be updated.
required: true
- Required when O(provider_id) is V(rsa) or V(rsa-enc). Not used for auto-generated providers.
type: str
certificate:
description:
@@ -110,8 +159,71 @@ options:
and O(provider_id).
- If you want Keycloak to automatically generate a certificate using your private key then set this to an empty
string.
required: true
- Required when O(provider_id) is V(rsa) or V(rsa-enc). Not used for auto-generated providers.
type: str
secret_size:
description:
- The size of the generated secret key in bytes.
- Only applicable to O(provider_id=hmac-generated) and O(provider_id=aes-generated).
- Valid values are V(16), V(24), V(32), V(64), V(128), V(256), V(512).
- Default is V(64) for HMAC, V(16) for AES.
type: int
version_added: 12.4.0
key_size:
description:
- The size of the generated key in bits.
- Only applicable to O(provider_id=rsa-generated) and O(provider_id=rsa-enc-generated).
- Valid values are V(1024), V(2048), V(4096). Default is V(2048).
type: int
version_added: 12.4.0
elliptic_curve:
description:
- The elliptic curve to use for ECDSA, ECDH, or EdDSA keys.
- For O(provider_id=ecdsa-generated) and O(provider_id=ecdh-generated), valid values are V(P-256), V(P-384), V(P-521). Default is V(P-256).
- For O(provider_id=eddsa-generated), valid values are V(Ed25519), V(Ed448). Default is V(Ed25519).
type: str
choices: ['P-256', 'P-384', 'P-521', 'Ed25519', 'Ed448']
version_added: 12.4.0
keystore:
description:
- Path to the Java Keystore file on the Keycloak server filesystem.
- Required when O(provider_id=java-keystore).
type: str
version_added: 12.4.0
keystore_password:
description:
- Password for the Java Keystore.
- Required when O(provider_id=java-keystore).
type: str
version_added: 12.4.0
key_alias:
description:
- Alias of the key within the keystore.
- Required when O(provider_id=java-keystore).
type: str
version_added: 12.4.0
key_password:
description:
- Password for the key within the keystore.
- If not specified, the O(config.keystore_password) is used.
- Only applicable to O(provider_id=java-keystore).
type: str
version_added: 12.4.0
update_password:
description:
- Controls when passwords are sent to Keycloak for V(java-keystore) provider.
- V(always) - Always send passwords. Keycloak will update the component even if passwords
have not changed. Use when you need to ensure passwords are updated.
- V(on_create) - Only send passwords when creating a new component. When updating an
existing component, send the masked value to preserve existing passwords. This makes
the module idempotent for password fields.
- This is necessary because Keycloak masks passwords in API responses (returns C(**********)),
making comparison impossible.
- Has no effect for providers other than V(java-keystore).
type: str
choices: ['always', 'on_create']
default: always
version_added: 12.4.0
notes:
- Current value of the private key cannot be fetched from Keycloak. Therefore comparing its desired state to the current
state is not possible.
@@ -119,6 +231,12 @@ notes:
state of the certificate to the desired state (which may be empty) is not possible.
- Due to the private key and certificate options the module is B(not fully idempotent). You can use O(force=true) to force
the module to ensure updating if you know that the private key might have changed.
- For auto-generated providers (V(rsa-generated), V(rsa-enc-generated), V(hmac-generated), V(aes-generated), V(ecdsa-generated),
V(ecdh-generated), V(eddsa-generated)), Keycloak manages the key material automatically. The O(config.private_key) and
O(config.certificate) options are not used.
- For V(java-keystore) provider, the O(config.keystore_password) and O(config.key_password) values are returned masked by
Keycloak. Therefore comparing their current state to the desired state is not possible. Use O(update_password=on_create)
for idempotent playbooks, or use O(update_password=always) (default) if you need to ensure passwords are updated.
extends_documentation_fragment:
- community.general.keycloak
- community.general.keycloak.actiongroup_keycloak
@@ -146,6 +264,7 @@ EXAMPLES = r"""
active: true
priority: 120
algorithm: RS256
- name: Manage Keycloak realm key and certificate
community.general.keycloak_realm_key:
name: custom
@@ -163,6 +282,178 @@ EXAMPLES = r"""
active: true
priority: 120
algorithm: RS256
- name: Create HMAC signing key (auto-generated)
community.general.keycloak_realm_key:
name: hmac-custom
state: present
parent_id: master
provider_id: hmac-generated
auth_keycloak_url: http://localhost:8080/auth
auth_username: keycloak
auth_password: keycloak
auth_realm: master
config:
enabled: true
active: true
priority: 100
algorithm: HS256
secret_size: 64
- name: Create AES encryption key (auto-generated)
community.general.keycloak_realm_key:
name: aes-custom
state: present
parent_id: master
provider_id: aes-generated
auth_keycloak_url: http://localhost:8080/auth
auth_username: keycloak
auth_password: keycloak
auth_realm: master
config:
enabled: true
active: true
priority: 100
secret_size: 16
- name: Create ECDSA signing key (auto-generated)
community.general.keycloak_realm_key:
name: ecdsa-custom
state: present
parent_id: master
provider_id: ecdsa-generated
auth_keycloak_url: http://localhost:8080/auth
auth_username: keycloak
auth_password: keycloak
auth_realm: master
config:
enabled: true
active: true
priority: 100
algorithm: ES256
elliptic_curve: P-256
- name: Create RSA signing key (auto-generated)
community.general.keycloak_realm_key:
name: rsa-auto
state: present
parent_id: master
provider_id: rsa-generated
auth_keycloak_url: http://localhost:8080/auth
auth_username: keycloak
auth_password: keycloak
auth_realm: master
config:
enabled: true
active: true
priority: 100
algorithm: RS256
key_size: 2048
- name: Remove default HMAC key
community.general.keycloak_realm_key:
name: hmac-generated
state: absent
parent_id: myrealm
provider_id: hmac-generated
auth_keycloak_url: http://localhost:8080/auth
auth_username: keycloak
auth_password: keycloak
auth_realm: master
config:
priority: 100
- name: Create RSA encryption key (auto-generated)
community.general.keycloak_realm_key:
name: rsa-enc-auto
state: present
parent_id: master
provider_id: rsa-enc-generated
auth_keycloak_url: http://localhost:8080/auth
auth_username: keycloak
auth_password: keycloak
auth_realm: master
config:
enabled: true
active: true
priority: 100
algorithm: RSA-OAEP
key_size: 2048
- name: Create ECDH key exchange key (auto-generated)
community.general.keycloak_realm_key:
name: ecdh-custom
state: present
parent_id: master
provider_id: ecdh-generated
auth_keycloak_url: http://localhost:8080/auth
auth_username: keycloak
auth_password: keycloak
auth_realm: master
config:
enabled: true
active: true
priority: 100
algorithm: ECDH_ES
elliptic_curve: P-256
- name: Create EdDSA signing key (auto-generated)
community.general.keycloak_realm_key:
name: eddsa-custom
state: present
parent_id: master
provider_id: eddsa-generated
auth_keycloak_url: http://localhost:8080/auth
auth_username: keycloak
auth_password: keycloak
auth_realm: master
config:
enabled: true
active: true
priority: 100
elliptic_curve: Ed25519
- name: Import key from Java Keystore (always update passwords)
community.general.keycloak_realm_key:
name: jks-imported
state: present
parent_id: master
provider_id: java-keystore
auth_keycloak_url: http://localhost:8080/auth
auth_username: keycloak
auth_password: keycloak
auth_realm: master
# update_password: always is the default - passwords are always sent to Keycloak
config:
enabled: true
active: true
priority: 100
algorithm: RS256
keystore: /opt/keycloak/conf/keystore.jks
keystore_password: "{{ keystore_password }}"
key_alias: mykey
key_password: "{{ key_password }}"
- name: Import key from Java Keystore (idempotent - only set password on create)
community.general.keycloak_realm_key:
name: jks-idempotent
state: present
parent_id: master
provider_id: java-keystore
auth_keycloak_url: http://localhost:8080/auth
auth_username: keycloak
auth_password: keycloak
auth_realm: master
update_password: on_create # Only send passwords when creating, preserve existing on update
config:
enabled: true
active: true
priority: 100
algorithm: RS256
keystore: /opt/keycloak/conf/keystore.jks
keystore_password: "{{ keystore_password }}"
key_alias: mykey
key_password: "{{ key_password }}"
"""
RETURN = r"""
@@ -219,8 +510,37 @@ end_state:
"140"
]
}
key_info:
description:
- Cryptographic key metadata fetched from the realm keys endpoint.
- Only returned for V(java-keystore) provider when O(state=present) and not in check mode.
- This includes the key ID (kid) and certificate fingerprint, which can be used to detect
if the actual cryptographic key changed.
type: dict
returned: when O(provider_id=java-keystore) and O(state=present)
version_added: 12.4.0
contains:
kid:
description: The key ID (kid), a unique identifier for the cryptographic key.
type: str
sample: bN7p5Nc_V2M7N_-mb5vVSRVPKq5qD_OuARInB9ofsJ0
certificate_fingerprint:
description: SHA256 fingerprint of the certificate in colon-separated hex format.
type: str
sample: "A1:B2:C3:D4:E5:F6:..."
status:
description: The key status (ACTIVE, PASSIVE, DISABLED).
type: str
sample: ACTIVE
valid_to:
description: Certificate expiration timestamp in milliseconds since epoch.
type: int
sample: 1801789047000
"""
import base64
import binascii
import hashlib
from copy import deepcopy
from urllib.parse import urlencode
@@ -234,6 +554,113 @@ from ansible_collections.community.general.plugins.module_utils.identity.keycloa
keycloak_argument_spec,
)
# Provider IDs that require private_key and certificate
IMPORTED_KEY_PROVIDERS = ["rsa", "rsa-enc"]
# Provider IDs that import keys from Java Keystore
KEYSTORE_PROVIDERS = ["java-keystore"]
# Provider IDs that auto-generate keys
GENERATED_KEY_PROVIDERS = [
"rsa-generated",
"rsa-enc-generated",
"hmac-generated",
"aes-generated",
"ecdsa-generated",
"ecdh-generated",
"eddsa-generated",
]
# Mapping of Ansible parameter names to Keycloak config property names
# for cases where camel() conversion doesn't produce the correct result.
# Each provider type may use a different config key for elliptic curve.
CONFIG_PARAM_MAPPING = {
"elliptic_curve": "ecdsaEllipticCurveKey",
}
# Provider-specific config key names for elliptic_curve parameter
# ECDSA and ECDH both use the same curves (P-256, P-384, P-521) but different config keys
# EdDSA uses different curves (Ed25519, Ed448) with its own config key
ELLIPTIC_CURVE_CONFIG_KEYS = {
"ecdsa-generated": "ecdsaEllipticCurveKey",
"ecdh-generated": "ecdhEllipticCurveKey",
"eddsa-generated": "eddsaEllipticCurveKey",
}
# Valid algorithm choices per provider type
# Note: aes-generated and eddsa-generated don't use algorithm config
PROVIDER_ALGORITHMS = {
"rsa": ["RS256", "RS384", "RS512", "PS256", "PS384", "PS512"],
"rsa-enc": ["RSA1_5", "RSA-OAEP", "RSA-OAEP-256"],
"java-keystore": ["RS256", "RS384", "RS512", "PS256", "PS384", "PS512"],
"rsa-generated": ["RS256", "RS384", "RS512", "PS256", "PS384", "PS512"],
"rsa-enc-generated": ["RSA1_5", "RSA-OAEP", "RSA-OAEP-256"],
"hmac-generated": ["HS256", "HS384", "HS512"],
"ecdsa-generated": ["ES256", "ES384", "ES512"],
"ecdh-generated": ["ECDH_ES", "ECDH_ES_A128KW", "ECDH_ES_A192KW", "ECDH_ES_A256KW"],
}
# Providers that don't use the algorithm config parameter
# eddsa-generated: algorithm is determined by the elliptic curve (Ed25519 or Ed448)
# aes-generated: always uses AES algorithm
PROVIDERS_WITHOUT_ALGORITHM = ["aes-generated", "eddsa-generated"]
# Providers where the RS256 default is valid (for backward compatibility)
PROVIDERS_WITH_RS256_DEFAULT = ["rsa", "rsa-generated", "java-keystore"]
# Config keys that cannot be compared and must be removed from changesets/diffs.
# privateKey/certificate: Keycloak doesn't return private keys, certificates are generated dynamically.
# keystorePassword/keyPassword: Keycloak masks these with "**********" in API responses.
SENSITIVE_CONFIG_KEYS = ["privateKey", "certificate", "keystorePassword", "keyPassword"]
def remove_sensitive_config_keys(config):
for key in SENSITIVE_CONFIG_KEYS:
config.pop(key, None)
def get_keycloak_config_key(param_name, provider_id=None):
"""Convert Ansible parameter name to Keycloak config key.
Uses explicit mapping if available, otherwise applies camelCase conversion.
For elliptic_curve, the config key depends on the provider type.
"""
# Handle elliptic_curve specially - each provider uses a different config key
if param_name == "elliptic_curve" and provider_id in ELLIPTIC_CURVE_CONFIG_KEYS:
return ELLIPTIC_CURVE_CONFIG_KEYS[provider_id]
if param_name in CONFIG_PARAM_MAPPING:
return CONFIG_PARAM_MAPPING[param_name]
return camel(param_name)
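The mapping above can be exercised standalone. The sketch below is an assumed, self-contained mirror of that logic (the `camel()` stand-in is a minimal approximation of the collection's helper, not imported from it): an explicit per-provider key wins over the generic snake_case-to-camelCase conversion.

```python
# Standalone sketch of the parameter-name mapping (assumed behavior):
# explicit per-provider config keys take precedence over camelCase conversion.
ELLIPTIC_CURVE_CONFIG_KEYS = {
    "ecdsa-generated": "ecdsaEllipticCurveKey",
    "ecdh-generated": "ecdhEllipticCurveKey",
    "eddsa-generated": "eddsaEllipticCurveKey",
}

def camel(name):
    # Minimal stand-in for the collection's camel() helper.
    head, *rest = name.split("_")
    return head + "".join(part.capitalize() for part in rest)

def to_config_key(param_name, provider_id=None):
    if param_name == "elliptic_curve" and provider_id in ELLIPTIC_CURVE_CONFIG_KEYS:
        return ELLIPTIC_CURVE_CONFIG_KEYS[provider_id]
    return camel(param_name)

print(to_config_key("elliptic_curve", "ecdh-generated"))  # ecdhEllipticCurveKey
print(to_config_key("secret_size"))                       # secretSize
```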
def compute_certificate_fingerprint(certificate_pem):
try:
cert_der = base64.b64decode(certificate_pem)
fingerprint = hashlib.sha256(cert_der).hexdigest().upper()
return ":".join(fingerprint[i : i + 2] for i in range(0, len(fingerprint), 2))
except (ValueError, binascii.Error, TypeError):
return None
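The fingerprint computation can be illustrated in isolation. A minimal sketch, assuming the input is the base64 body of the certificate as Keycloak returns it (the sample payload is obviously not a real certificate):

```python
import base64
import hashlib

def fingerprint(cert_b64):
    # Decode the base64 DER blob and format its SHA-256 digest as
    # colon-separated uppercase hex, mirroring the function above.
    der = base64.b64decode(cert_b64)
    hexdigest = hashlib.sha256(der).hexdigest().upper()
    return ":".join(hexdigest[i:i + 2] for i in range(0, len(hexdigest), 2))

# Any base64 payload works for illustration; real input is the certificate
# body returned by the realm keys endpoint.
sample = base64.b64encode(b"not-a-real-certificate").decode()
print(fingerprint(sample))
```

A SHA-256 digest is 32 bytes, so the result always has 32 colon-separated hex pairs.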
def get_key_info_for_component(kc, realm, component_id):
try:
keys_response = kc.get_realm_keys_metadata_by_id(realm)
if not keys_response or "keys" not in keys_response:
return None
for key in keys_response.get("keys", []):
if key.get("providerId") == component_id:
return {
"kid": key.get("kid"),
"certificate_fingerprint": compute_certificate_fingerprint(key.get("certificate")),
"public_key": key.get("publicKey"),
"valid_to": key.get("validTo"),
"status": key.get("status"),
"algorithm": key.get("algorithm"),
"type": key.get("type"),
}
return None
except (KeyError, TypeError):
return None
def main():
"""
@@ -248,7 +675,22 @@ def main():
name=dict(type="str", required=True),
force=dict(type="bool", default=False),
parent_id=dict(type="str", required=True),
provider_id=dict(type="str", default="rsa", choices=["rsa", "rsa-enc"]),
provider_id=dict(
type="str",
default="rsa",
choices=[
"rsa",
"rsa-enc",
"java-keystore",
"rsa-generated",
"rsa-enc-generated",
"hmac-generated",
"aes-generated",
"ecdsa-generated",
"ecdh-generated",
"eddsa-generated",
],
),
config=dict(
type="dict",
options=dict(
@@ -268,12 +710,38 @@ def main():
"RSA1_5",
"RSA-OAEP",
"RSA-OAEP-256",
"HS256",
"HS384",
"HS512",
"ES256",
"ES384",
"ES512",
"AES",
"ECDH_ES",
"ECDH_ES_A128KW",
"ECDH_ES_A192KW",
"ECDH_ES_A256KW",
"Ed25519",
"Ed448",
],
),
private_key=dict(type="str", required=True, no_log=True),
certificate=dict(type="str", required=True),
private_key=dict(type="str", no_log=True),
certificate=dict(type="str"),
secret_size=dict(type="int", no_log=False),
key_size=dict(type="int"),
elliptic_curve=dict(type="str", choices=["P-256", "P-384", "P-521", "Ed25519", "Ed448"]),
keystore=dict(type="str", no_log=False),
keystore_password=dict(type="str", no_log=True),
key_alias=dict(type="str", no_log=False),
key_password=dict(type="str", no_log=True),
),
),
update_password=dict(
type="str",
default="always",
choices=["always", "on_create"],
no_log=False,
),
)
argument_spec.update(meta_args)
@@ -288,8 +756,61 @@ def main():
required_by={"refresh_token": "auth_realm"},
)
# Initialize the result object. Only "changed" seems to have special
# meaning for Ansible.
provider_id = module.params["provider_id"]
config = module.params["config"] or {}
state = module.params["state"]
# Validate that imported key providers have the required parameters
if state == "present" and provider_id in IMPORTED_KEY_PROVIDERS:
if not config.get("private_key"):
module.fail_json(msg=f"config.private_key is required for provider_id '{provider_id}'")
if config.get("certificate") is None:
module.fail_json(
msg=f"config.certificate is required for provider_id '{provider_id}' (use empty string for auto-generation)"
)
# Validate that java-keystore providers have the required parameters
if state == "present" and provider_id in KEYSTORE_PROVIDERS:
required_params = ["keystore", "keystore_password", "key_alias"]
missing = [p for p in required_params if not config.get(p)]
if missing:
module.fail_json(
msg=f"For provider_id=java-keystore, the following config options are required: {', '.join(missing)}"
)
# Validate algorithm for providers that use it
if state == "present":
algorithm = config.get("algorithm")
if provider_id in PROVIDER_ALGORITHMS:
valid_algorithms = PROVIDER_ALGORITHMS[provider_id]
if algorithm not in valid_algorithms:
msg = f"algorithm '{algorithm}' is not valid for provider_id '{provider_id}'."
if algorithm == "RS256" and provider_id not in PROVIDERS_WITH_RS256_DEFAULT:
msg += " The default 'RS256' is not valid for this provider."
msg += f" Valid choices are: {', '.join(valid_algorithms)}"
module.fail_json(msg=msg)
elif provider_id in PROVIDERS_WITHOUT_ALGORITHM and algorithm is not None and algorithm != "RS256":
# aes-generated and eddsa-generated don't use algorithm - only warn if user explicitly set a non-default value
module.warn(f"algorithm is ignored for provider_id '{provider_id}'")
# Validate elliptic curve for providers that use it
if state == "present":
elliptic_curve = config.get("elliptic_curve")
if provider_id in ["ecdsa-generated", "ecdh-generated"] and elliptic_curve is not None:
valid_curves = ["P-256", "P-384", "P-521"]
if elliptic_curve not in valid_curves:
module.fail_json(
msg=f"elliptic_curve '{elliptic_curve}' is not valid for provider_id '{provider_id}'. "
f"Valid choices are: {', '.join(valid_curves)}"
)
elif provider_id == "eddsa-generated" and elliptic_curve is not None:
valid_curves = ["Ed25519", "Ed448"]
if elliptic_curve not in valid_curves:
module.fail_json(
msg=f"elliptic_curve '{elliptic_curve}' is not valid for provider_id '{provider_id}'. "
f"Valid choices are: {', '.join(valid_curves)}"
)
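The per-provider validation above follows a simple table-lookup pattern. A minimal sketch of that pattern, with `fail_json` replaced by a `ValueError` so it runs outside Ansible (table contents abbreviated):

```python
# Sketch of table-driven algorithm validation (abbreviated table).
PROVIDER_ALGORITHMS = {
    "hmac-generated": ["HS256", "HS384", "HS512"],
    "ecdsa-generated": ["ES256", "ES384", "ES512"],
}

def validate_algorithm(provider_id, algorithm):
    valid = PROVIDER_ALGORITHMS.get(provider_id)
    if valid is not None and algorithm not in valid:
        raise ValueError(
            f"algorithm '{algorithm}' is not valid for provider_id "
            f"'{provider_id}'. Valid choices are: {', '.join(valid)}"
        )

validate_algorithm("hmac-generated", "HS256")  # passes silently
try:
    validate_algorithm("hmac-generated", "RS256")  # the module default
except ValueError as e:
    print(e)
```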
result = dict(changed=False, msg="", end_state={}, diff=dict(before={}, after={}))
# This will include the current state of the realm key if it is already
@@ -305,7 +826,7 @@ def main():
kc = KeycloakAPI(module, connection_header)
params_to_ignore = list(keycloak_argument_spec().keys()) + ["state", "force", "parent_id"]
params_to_ignore = list(keycloak_argument_spec().keys()) + ["state", "force", "parent_id", "update_password"]
# Filter and map the parameters names that apply to the role
component_params = [x for x in module.params if x not in params_to_ignore and module.params.get(x) is not None]
@@ -332,18 +853,25 @@ def main():
#
for component_param in component_params:
if component_param == "config":
for config_param in module.params.get("config"):
changeset["config"][camel(config_param)] = []
raw_value = module.params.get("config")[config_param]
for config_param in module.params["config"]:
raw_value = module.params["config"][config_param]
# Optional params (secret_size, key_size, elliptic_curve) default to None.
# Skip them to avoid sending str(None) = "None" as a config value to Keycloak.
if raw_value is None:
continue
# Use custom mapping if available, otherwise camelCase
# Pass provider_id for elliptic_curve which uses different config keys per provider
keycloak_key = get_keycloak_config_key(config_param, provider_id)
changeset["config"][keycloak_key] = []
if isinstance(raw_value, bool):
value = str(raw_value).lower()
else:
value = str(raw_value)
changeset["config"][camel(config_param)].append(value)
changeset["config"][keycloak_key].append(value)
else:
# No need for camelcase in here as these are one word parameters
new_param_value = module.params.get(component_param)
new_param_value = module.params[component_param]
changeset[camel(component_param)] = new_param_value
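The config loop above can be condensed into a small standalone sketch: drop `None` values, lowercase booleans, stringify everything else, and wrap each value in a single-element list as the Keycloak component API expects. The `to_key` parameter is a hypothetical stand-in for the name-mapping step:

```python
# Sketch of the config changeset construction (assumed behavior).
def build_config_changeset(config, to_key=lambda name: name):
    changeset = {}
    for param, raw in config.items():
        if raw is None:
            continue  # optional params default to None; never send "None"
        value = str(raw).lower() if isinstance(raw, bool) else str(raw)
        changeset[to_key(param)] = [value]  # component API expects list values
    return changeset

print(build_config_changeset({"enabled": True, "priority": 100, "key_size": None}))
# {'enabled': ['true'], 'priority': ['100']}
```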
# As provider_type is not a module parameter we have to add it to the
@@ -354,22 +882,14 @@ def main():
# changes to the current state.
changeset_copy = deepcopy(changeset)
# It is not possible to compare current keys to desired keys, because the
# certificate parameter is a base64-encoded binary blob created on the fly
# when a key is added. Moreover, the Keycloak Admin API does not seem to
# return the value of the private key for comparison. So, in effect, we
# just have to ignore changes to the keys. However, as the privateKey
# parameter needs to be present in the JSON payload, any changes done to any
# other parameters (e.g. config.priority) will trigger update of the keys
# as a side-effect.
del changeset_copy["config"]["privateKey"]
del changeset_copy["config"]["certificate"]
# Remove keys that cannot be compared: privateKey/certificate (not returned
# by Keycloak API) and keystore passwords (masked with "**********").
# The actual values remain in 'changeset' for the API payload.
remove_sensitive_config_keys(changeset_copy["config"])
# Make it easier to refer to current module parameters
name = module.params.get("name")
force = module.params.get("force")
state = module.params.get("state")
parent_id = module.params.get("parent_id")
name = module.params["name"]
force = module.params["force"]
parent_id = module.params["parent_id"]
# Get a list of all Keycloak components that are of keyprovider type.
realm_keys = kc.get_components(urlencode(dict(type=provider_type)), parent_id)
@@ -402,16 +922,50 @@ def main():
result["changed"] = True
# Compare parameters under the "config" key
# Note: Keycloak API may not return all config fields for default keys
# (e.g., 'active', 'enabled', 'algorithm' may be missing). Handle this
# gracefully by using .get() with defaults.
for p, v in changeset_copy["config"].items():
before_realm_key["config"][p] = key["config"][p]
if v != key["config"][p]:
changes += f"config.{p}: {key['config'][p]} -> {v}, "
# Get the current value, defaulting to our expected value if not present
# This handles the case where Keycloak doesn't return certain fields
# for default/generated keys
current_value = key["config"].get(p, v)
before_realm_key["config"][p] = current_value
if v != current_value:
changes += f"config.{p}: {current_value} -> {v}, "
result["changed"] = True
# Sanitize linefeeds for the privateKey. Without this the JSON payload
# will be invalid.
changeset["config"]["privateKey"][0] = changeset["config"]["privateKey"][0].replace("\\n", "\n")
changeset["config"]["certificate"][0] = changeset["config"]["certificate"][0].replace("\\n", "\n")
# For java-keystore provider, also fetch and compare key info (kid)
# This detects if the actual cryptographic key changed even when
# other config parameters remain the same
if provider_id in KEYSTORE_PROVIDERS:
current_key_info = get_key_info_for_component(kc, parent_id, key_id)
if current_key_info:
before_realm_key["key_info"] = {
"kid": current_key_info.get("kid"),
"certificate_fingerprint": current_key_info.get("certificate_fingerprint"),
}
# Sanitize linefeeds for the privateKey and certificate (only for imported providers).
# Without this the JSON payload will be invalid.
if "privateKey" in changeset["config"]:
changeset["config"]["privateKey"][0] = changeset["config"]["privateKey"][0].replace("\\n", "\n")
if "certificate" in changeset["config"]:
changeset["config"]["certificate"][0] = changeset["config"]["certificate"][0].replace("\\n", "\n")
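The sanitization step above exists because PEM material passed through playbook variables often arrives with literal backslash-n sequences instead of real newlines. A one-line sketch:

```python
# Turn literal "\n" sequences into real newlines before building the JSON payload.
def sanitize_pem(value):
    return value.replace("\\n", "\n")

raw = "-----BEGIN PRIVATE KEY-----\\nMIIB...\\n-----END PRIVATE KEY-----"
print(sanitize_pem(raw))
```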
# For java-keystore provider: handle update_password parameter
# When update_password=on_create and we're updating an existing component,
# replace actual passwords with the masked value ("**********") that Keycloak
# returns in API responses. When Keycloak receives this masked value, it
# preserves the existing password instead of updating it.
# This makes the module idempotent for password fields.
update_password = module.params["update_password"]
if provider_id in KEYSTORE_PROVIDERS and key_id and update_password == "on_create":
SECRET_VALUE = "**********"
if "keystorePassword" in changeset["config"]:
changeset["config"]["keystorePassword"] = [SECRET_VALUE]
if "keyPassword" in changeset["config"]:
changeset["config"]["keyPassword"] = [SECRET_VALUE]
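The `update_password=on_create` handling above can be sketched on plain dicts (assumed semantics, not imported from the module): when updating an existing component, real passwords are swapped for Keycloak's mask so the stored secrets are preserved.

```python
# Sketch of the on_create password masking (assumed behavior).
SECRET_VALUE = "**********"
PASSWORD_KEYS = ("keystorePassword", "keyPassword")

def mask_passwords(config, updating_existing, update_password):
    if updating_existing and update_password == "on_create":
        for key in PASSWORD_KEYS:
            if key in config:
                config[key] = [SECRET_VALUE]  # Keycloak keeps the old secret
    return config

cfg = {"keystorePassword": ["s3cret"], "keyAlias": ["mykey"]}
print(mask_passwords(cfg, updating_existing=True, update_password="on_create"))
# {'keystorePassword': ['**********'], 'keyAlias': ['mykey']}
```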
# Check all the possible states of the resource and do what is needed to
# converge current state with desired state (create, update or delete
@@ -419,8 +973,7 @@ def main():
if key_id and state == "present":
if result["changed"]:
if module._diff:
del before_realm_key["config"]["privateKey"]
del before_realm_key["config"]["certificate"]
remove_sensitive_config_keys(before_realm_key["config"])
result["diff"] = dict(before=before_realm_key, after=changeset_copy)
if module.check_mode:
@@ -436,10 +989,26 @@ def main():
result["msg"] = f"Realm key {name} was in sync"
result["end_state"] = changeset_copy
# For java-keystore provider, include key info in end_state
if provider_id in KEYSTORE_PROVIDERS:
if not module.check_mode:
key_info = get_key_info_for_component(kc, parent_id, key_id)
if key_info:
result["end_state"]["key_info"] = {
"kid": key_info.get("kid"),
"certificate_fingerprint": key_info.get("certificate_fingerprint"),
"status": key_info.get("status"),
"valid_to": key_info.get("valid_to"),
}
else:
module.warn(
f"Key component '{name}' exists but no active key was found. "
"This may indicate an incorrect keystore password, path, or alias."
)
elif key_id and state == "absent":
if module._diff:
del before_realm_key["config"]["privateKey"]
del before_realm_key["config"]["certificate"]
remove_sensitive_config_keys(before_realm_key["config"])
result["diff"] = dict(before=before_realm_key, after={})
if module.check_mode:
@@ -463,6 +1032,28 @@ def main():
result["changed"] = True
result["msg"] = f"Realm key {name} created"
# For java-keystore provider, fetch and include key info after creation
if provider_id in KEYSTORE_PROVIDERS:
# We need to get the component ID first (it was just created)
realm_keys_after = kc.get_components(urlencode(dict(type=provider_type)), parent_id)
for k in realm_keys_after:
if k["name"] == name:
new_key_id = k["id"]
key_info = get_key_info_for_component(kc, parent_id, new_key_id)
if key_info:
changeset_copy["key_info"] = {
"kid": key_info.get("kid"),
"certificate_fingerprint": key_info.get("certificate_fingerprint"),
"status": key_info.get("status"),
"valid_to": key_info.get("valid_to"),
}
else:
module.warn(
f"Key component '{name}' was created but no active key was found. "
"This may indicate an incorrect keystore password, path, or alias."
)
break
result["end_state"] = changeset_copy
elif not key_id and state == "absent":
result["changed"] = False

View File

@@ -0,0 +1,398 @@
#!/usr/bin/python
# Copyright Jakub Danek <danek.ja@gmail.com>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or
# https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
from __future__ import annotations
DOCUMENTATION = r"""
module: keycloak_realm_localization
short_description: Allows management of Keycloak realm localization overrides via the Keycloak API
version_added: 12.4.0
description:
- This module allows you to manage per-locale message overrides for a Keycloak realm using the Keycloak Admin REST API.
- Requires access via OpenID Connect; the connecting user/client must have sufficient privileges.
- The names of module options are snake_cased versions of the names found in the Keycloak API.
attributes:
check_mode:
support: full
diff_mode:
support: full
options:
force:
description:
- If V(false), only the keys listed in O(overrides) are modified by this module. Any other pre-existing
keys are ignored.
- If V(true), all locale overrides are made to match the configuration of this module. For example, any keys
missing from O(overrides) are removed regardless of the O(state) value.
type: bool
default: false
locale:
description:
- Locale code for which the overrides apply (for example, V(en), V(fi), V(de)).
type: str
required: true
parent_id:
description:
- Name of the realm that owns the locale overrides.
type: str
required: true
state:
description:
- Desired state of localization overrides for the given locale.
- On V(present), the set of overrides for the locale is made to match O(overrides).
If O(force) is V(true), keys not listed in O(overrides) are removed,
and the listed keys are created or updated.
If O(force) is V(false), keys not listed in O(overrides) are ignored,
and the listed keys are created or updated.
- On V(absent), overrides for the locale are removed. If O(force) is V(true), all keys are removed.
If O(force) is V(false), only the keys listed in O(overrides) are removed.
type: str
choices: ['present', 'absent']
default: present
overrides:
description:
- List of overrides to ensure for the locale when O(state=present). Each item is a mapping with
the record's O(overrides[].key) and its O(overrides[].value).
- Ignored when O(state=absent).
type: list
elements: dict
default: []
suboptions:
key:
description:
- The message key to override.
type: str
required: true
value:
description:
- The override value for the message key. If omitted, value defaults to an empty string.
type: str
default: ""
required: false
seealso:
- module: community.general.keycloak_realm
description: You can specify the list of supported locales using O(community.general.keycloak_realm#module:supported_locales).
extends_documentation_fragment:
- community.general.keycloak
- community.general.keycloak.actiongroup_keycloak
- community.general.attributes
author: Jakub Danek (@danekja)
"""
EXAMPLES = r"""
- name: Replace all overrides for locale "en" (credentials auth)
community.general.keycloak_realm_localization:
auth_client_id: admin-cli
auth_keycloak_url: https://auth.example.com/auth
auth_realm: master
auth_username: USERNAME
auth_password: PASSWORD
parent_id: my-realm
locale: en
state: present
force: true
overrides:
- key: greeting
value: "Hello"
- key: farewell
value: "Bye"
delegate_to: localhost
- name: Replace listed overrides for locale "en" (credentials auth)
community.general.keycloak_realm_localization:
auth_client_id: admin-cli
auth_keycloak_url: https://auth.example.com/auth
auth_realm: master
auth_username: USERNAME
auth_password: PASSWORD
parent_id: my-realm
locale: en
state: present
force: false
overrides:
- key: greeting
value: "Hello"
- key: farewell
value: "Bye"
delegate_to: localhost
- name: Ensure only one override exists for locale "fi" (token auth)
community.general.keycloak_realm_localization:
auth_client_id: admin-cli
auth_keycloak_url: https://auth.example.com/auth
token: TOKEN
parent_id: my-realm
locale: fi
state: present
force: true
overrides:
- key: app.title
value: "Sovellukseni"
delegate_to: localhost
- name: Remove all overrides for locale "de"
community.general.keycloak_realm_localization:
auth_client_id: admin-cli
auth_keycloak_url: https://auth.example.com/auth
auth_realm: master
auth_username: USERNAME
auth_password: PASSWORD
parent_id: my-realm
locale: de
state: absent
force: true
delegate_to: localhost
- name: Remove only the listed overrides for locale "de"
community.general.keycloak_realm_localization:
auth_client_id: admin-cli
auth_keycloak_url: https://auth.example.com/auth
auth_realm: master
auth_username: USERNAME
auth_password: PASSWORD
parent_id: my-realm
locale: de
state: absent
force: false
overrides:
- key: app.title
- key: foo
- key: bar
delegate_to: localhost
"""
RETURN = r"""
end_state:
description:
- Final state of localization overrides for the locale after module execution.
- Contains the O(locale) and the list of O(overrides) as key/value items.
returned: on success
type: dict
contains:
locale:
description: The locale code affected.
type: str
sample: en
overrides:
description: The list of overrides that exist after execution.
type: list
elements: dict
sample:
- key: greeting
value: Hello
- key: farewell
value: Bye
"""
from copy import deepcopy
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.identity.keycloak.keycloak import (
KeycloakAPI,
KeycloakError,
get_token,
keycloak_argument_spec,
)
def _normalize_overrides(current: dict | None) -> list[dict]:
"""
Accepts:
- dict: {'k1': 'v1', ...}
Returns a sorted list of {'key': ..., 'value': ...} mappings.
This helper provides a consistent shape for downstream comparison/diff logic.
"""
if not current:
return []
return [{"key": k, "value": v} for k, v in sorted(current.items())]
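A usage sketch for the helper above: both the desired overrides (built from module parameters) and the current server state (a plain key-to-value dict) are normalized into the same sorted list-of-dicts shape before comparison. The `normalize` function here is a self-contained copy for illustration:

```python
# Self-contained copy of the normalization helper for illustration.
def normalize(current):
    if not current:
        return []
    return [{"key": k, "value": v} for k, v in sorted(current.items())]

print(normalize({"greeting": "Hello", "farewell": "Bye"}))
# [{'key': 'farewell', 'value': 'Bye'}, {'key': 'greeting', 'value': 'Hello'}]
print(normalize(None))  # []
```

Sorting by key gives a stable ordering, so two normalized lists can be compared directly for equality and shown cleanly in diff output.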
def main():
argument_spec = keycloak_argument_spec()
# Single override record structure
overrides_spec = dict(
key=dict(type="str", no_log=False, required=True),
value=dict(type="str", default=""),
)
meta_args = dict(
locale=dict(type="str", required=True),
parent_id=dict(type="str", required=True),
state=dict(type="str", default="present", choices=["present", "absent"]),
overrides=dict(type="list", elements="dict", options=overrides_spec, default=[]),
force=dict(type="bool", default=False),
)
argument_spec.update(meta_args)
module = AnsibleModule(
argument_spec=argument_spec,
supports_check_mode=True,
required_one_of=([["token", "auth_realm", "auth_username", "auth_password"]]),
required_together=([["auth_realm", "auth_username", "auth_password"]]),
)
result = dict(changed=False, msg="", end_state={}, diff=dict(before={}, after={}))
# Obtain access token, initialize API
try:
connection_header = get_token(module.params)
except KeycloakError as e:
module.fail_json(msg=str(e))
kc = KeycloakAPI(module, connection_header)
# Convenience locals for frequently used parameters
locale = module.params["locale"]
state = module.params["state"]
parent_id = module.params["parent_id"]
force = module.params["force"]
desired_raw = module.params["overrides"]
desired_overrides = _normalize_overrides({r["key"]: r.get("value") for r in desired_raw})
old_overrides = _normalize_overrides(kc.get_localization_values(locale, parent_id) or {})
before = {
"locale": locale,
"overrides": deepcopy(old_overrides),
}
# Proposed state used for diff reporting
changeset = {
"locale": locale,
"overrides": [],
}
result["changed"] = False
if state == "present":
changeset["overrides"] = deepcopy(desired_overrides)
# Compute two sets:
# - to_update: keys missing or with different values
# - to_remove: keys existing in current state but not in desired
to_update = []
to_remove = deepcopy(old_overrides)
# Mark updates and remove matched ones from to_remove
for record in desired_overrides:
override_found = False
for override in to_remove:
if override["key"] == record["key"]:
override_found = True
# Value differs -> update needed
if override["value"] != record["value"]:
result["changed"] = True
to_update.append(record)
# Remove processed item so what's left in to_remove are deletions
to_remove.remove(override)
break
if not override_found:
# New key, must be created
to_update.append(record)
result["changed"] = True
        # force is false: keep any leftover overrides (carry them into the changeset) instead of deleting them
if not force:
changeset["overrides"].extend(to_remove)
to_remove = []
if to_remove:
result["changed"] = True
if result["changed"]:
if module._diff:
result["diff"] = dict(before=before, after=changeset)
if module.check_mode:
result["msg"] = f"Locale {locale} overrides would be updated."
else:
for override in to_remove:
kc.delete_localization_value(locale, override["key"], parent_id)
for override in to_update:
kc.set_localization_value(locale, override["key"], override["value"], parent_id)
result["msg"] = f"Locale {locale} overrides have been updated."
else:
result["msg"] = f"Locale {locale} overrides are in sync."
# For accurate end_state, read back from API unless we are in check_mode
if not module.check_mode:
final_overrides = _normalize_overrides(kc.get_localization_values(locale, parent_id) or {})
else:
final_overrides = ["overrides"]
result["end_state"] = {"locale": locale, "overrides": final_overrides}
elif state == "absent":
if force:
to_remove = old_overrides
else:
            # only touch overrides listed in the parameters; leave the rest alone
to_remove = deepcopy(desired_overrides)
to_keep = deepcopy(old_overrides)
        for override in list(to_remove):  # iterate over a copy; items are removed from to_remove below
found = False
for keep in to_keep:
if override["key"] == keep["key"]:
to_keep.remove(keep)
found = True
break
if not found:
to_remove.remove(override)
changeset["overrides"] = to_keep
if to_remove:
result["changed"] = True
if module._diff:
result["diff"] = dict(before=before, after=changeset)
if module.check_mode:
if result["changed"]:
result["msg"] = f"{len(to_remove)} overrides for locale {locale} would be deleted."
else:
result["msg"] = f"No overrides for locale {locale} to be deleted."
else:
for override in to_remove:
kc.delete_localization_value(locale, override["key"], parent_id)
if result["changed"]:
result["msg"] = f"{len(to_remove)} overrides for locale {locale} deleted."
else:
result["msg"] = f"No overrides for locale {locale} to be deleted."
result["end_state"] = changeset
module.exit_json(**result)
if __name__ == "__main__":
main()
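The present-state reconciliation above is essentially a key/value diff between the current and desired override lists. A minimal standalone sketch of that logic (the `diff_overrides` helper is hypothetical, not part of the module):

```python
def diff_overrides(current, desired):
    """Return (to_update, to_remove): keys to create/change, and keys only present in current."""
    to_update = {k: v for k, v in desired.items() if current.get(k) != v}
    to_remove = [k for k in current if k not in desired]
    return to_update, to_remove


current = {"app.title": "Old", "app.footer": "Keep"}
desired = {"app.title": "New", "app.greeting": "Hi"}
to_update, to_remove = diff_overrides(current, desired)
# to_update == {"app.title": "New", "app.greeting": "Hi"}
# to_remove == ["app.footer"]
```

Note that the module only deletes leftover keys when `force` is true; with `force: false` the leftovers are kept and merely folded into the reported changeset.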

@@ -744,15 +744,30 @@ def normalize_kc_comp(comp):
def sanitize(comp):
def sanitize_value(v):
"""Convert list values: single-element lists to strings, multi-element lists sorted alphabetically, others as-is."""
if isinstance(v, list):
if len(v) == 0:
return None
elif len(v) == 1:
return v[0]
else:
return sorted(v)
else:
return v
compcopy = deepcopy(comp)
if "config" in compcopy:
compcopy["config"] = {k: v[0] for k, v in compcopy["config"].items()}
compcopy["config"] = {k: sanitize_value(v) for k, v in compcopy["config"].items()}
        # Drop None values (empty lists were converted to None above)
compcopy["config"] = {k: v for k, v in compcopy["config"].items() if v is not None}
if "bindCredential" in compcopy["config"]:
compcopy["config"]["bindCredential"] = "**********"
if "mappers" in compcopy:
for mapper in compcopy["mappers"]:
if "config" in mapper:
mapper["config"] = {k: v[0] for k, v in mapper["config"].items()}
mapper["config"] = {k: sanitize_value(v) for k, v in mapper["config"].items()}
mapper["config"] = {k: v for k, v in mapper["config"].items() if v is not None}
return compcopy
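The `sanitize_value` hunk above replaces unconditional `v[0]` indexing, which raised `IndexError` on empty config lists. Its contract restated as a runnable sketch:

```python
def sanitize_value(v):
    """Normalize config values: [] -> None, [x] -> x, longer lists sorted; others pass through."""
    if isinstance(v, list):
        if len(v) == 0:
            return None
        if len(v) == 1:
            return v[0]
        return sorted(v)
    return v


examples = {
    "empty": sanitize_value([]),          # None (the caller then drops these keys)
    "single": sanitize_value(["ldap"]),   # "ldap"
    "multi": sanitize_value(["b", "a"]),  # ["a", "b"]
    "plain": sanitize_value("x"),         # "x"
}
```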
@@ -886,11 +901,15 @@ def main():
if mappers is not None:
for mapper in mappers:
if mapper.get("config") is not None:
mapper["config"] = {
k: [str(v).lower() if not isinstance(v, str) else v]
for k, v in mapper["config"].items()
if mapper["config"][k] is not None
}
new_config = {}
for k, v in mapper["config"].items():
if v is None:
continue
if isinstance(v, list):
new_config[k] = [str(item).lower() if not isinstance(item, str) else item for item in v]
else:
new_config[k] = [str(v).lower() if not isinstance(v, str) else v]
mapper["config"] = new_config
# Filter and map the parameters names that apply
comp_params = [

@@ -356,9 +356,9 @@ def main():
if role_rep is not None:
role["name"] = role_rep["name"]
else:
role["name"] = kc.get_client_user_rolemapping_by_id(
uid=uid, cid=cid, rid=role.get("id"), realm=realm
)["name"]
role_rep = kc.get_client_user_rolemapping_by_id(uid=uid, cid=cid, rid=role.get("id"), realm=realm)
if role_rep is not None:
role["name"] = role_rep["name"]
if role.get("name") is None:
module.fail_json(
msg=f"Could not fetch role {role.get('id')} for client_id {client_id} or realm {realm}"

@@ -465,17 +465,25 @@ class MavenDownloader:
content = self._getContent(self.base + path, f"Failed to retrieve the maven metadata file: {path}")
xml = etree.fromstring(content)
for snapshotArtifact in xml.xpath("/metadata/versioning/snapshotVersions/snapshotVersion"):
classifier = snapshotArtifact.xpath("classifier/text()")
artifact_classifier = classifier[0] if classifier else ""
extension = snapshotArtifact.xpath("extension/text()")
artifact_extension = extension[0] if extension else ""
if artifact_classifier == artifact.classifier and artifact_extension == artifact.extension:
return self._uri_for_artifact(artifact, snapshotArtifact.xpath("value/text()")[0])
candidates = []
for snapshot_artifact in xml.xpath("/metadata/versioning/snapshotVersions/snapshotVersion"):
classifier = snapshot_artifact.xpath("classifier/text()")
extension = snapshot_artifact.xpath("extension/text()")
if (classifier[0] if classifier else "") == artifact.classifier and (
extension[0] if extension else ""
) == artifact.extension:
value = snapshot_artifact.xpath("value/text()")
updated = snapshot_artifact.xpath("updated/text()")
if value:
candidates.append((updated[0] if updated else "", value[0]))
if candidates:
# updated is yyyymmddHHMMSS, so lexical max == newest
return self._uri_for_artifact(artifact, max(candidates, key=lambda item: item[0])[1])
timestamp_xmlpath = xml.xpath("/metadata/versioning/snapshot/timestamp/text()")
if timestamp_xmlpath:
build_number_xmlpath = xml.xpath("/metadata/versioning/snapshot/buildNumber/text()")
if timestamp_xmlpath and build_number_xmlpath:
timestamp = timestamp_xmlpath[0]
build_number = xml.xpath("/metadata/versioning/snapshot/buildNumber/text()")[0]
build_number = build_number_xmlpath[0]
return self._uri_for_artifact(
artifact, artifact.version.replace("SNAPSHOT", f"{timestamp}-{build_number}")
)
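Because the `updated` field is a `yyyymmddHHMMSS` string, the newest snapshot can be selected with a plain lexical `max`, as the hunk above does. A small sketch with made-up metadata values:

```python
# (updated, value) pairs as collected from matching snapshotVersion entries
candidates = [
    ("20240101120000", "1.0-20240101.120000-1"),
    ("20240305093000", "1.0-20240305.093000-2"),
    ("20231231235959", "1.0-20231231.235959-9"),
]

# Fixed-width numeric timestamps sort lexically in chronological order,
# so string max() selects the most recently deployed build.
newest = max(candidates, key=lambda item: item[0])[1]
# newest == "1.0-20240305.093000-2"
```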

@@ -248,6 +248,9 @@ class RecordManager:
module.fail_json(msg="Missing key_secret")
except binascii_error as e:
module.fail_json(msg=f"TSIG key error: {e}")
else:
self.keyring = None
self.keyname = None
if module.params["zone"] is None:
if module.params["record"][-1] != ".":

@@ -467,10 +467,10 @@ class ImageModule(OpenNebulaModule):
return None
def get_image_by_name(self, image_name):
return self.get_image(lambda image: (image_name == image.NAME))
return self.get_image(lambda image: image_name == image.NAME)
def get_image_by_id(self, image_id):
return self.get_image(lambda image: (image_id == image.ID))
return self.get_image(lambda image: image_id == image.ID)
def get_image_instance(self, requested_id, requested_name):
# Using 'if requested_id:' doesn't work properly when requested_id=0

@@ -330,11 +330,11 @@ def get_service(module, auth, pred):
def get_service_by_id(module, auth, service_id):
return get_service(module, auth, lambda service: (int(service["ID"]) == int(service_id))) if service_id else None
return get_service(module, auth, lambda service: int(service["ID"]) == int(service_id)) if service_id else None
def get_service_by_name(module, auth, service_name):
return get_service(module, auth, lambda service: (service["NAME"] == service_name))
return get_service(module, auth, lambda service: service["NAME"] == service_name)
def get_service_info(module, auth, service):
@@ -681,13 +681,11 @@ def delete_service(module, auth, service_id):
def get_template_by_name(module, auth, template_name):
return get_template(module, auth, lambda template: (template["NAME"] == template_name))
return get_template(module, auth, lambda template: template["NAME"] == template_name)
def get_template_by_id(module, auth, template_id):
return (
get_template(module, auth, lambda template: (int(template["ID"]) == int(template_id))) if template_id else None
)
return get_template(module, auth, lambda template: int(template["ID"]) == int(template_id)) if template_id else None
def get_template_id(module, auth, requested_id, requested_name):

@@ -226,10 +226,10 @@ class TemplateModule(OpenNebulaModule):
return None
def get_template_by_id(self, template_id, filter):
return self.get_template(lambda template: (template_id == template.ID), filter)
return self.get_template(lambda template: template_id == template.ID, filter)
def get_template_by_name(self, name, filter):
return self.get_template(lambda template: (name == template.NAME), filter)
return self.get_template(lambda template: name == template.NAME, filter)
def get_template_instance(self, requested_id, requested_name, filter):
if requested_id:

@@ -766,11 +766,11 @@ def get_template(module, client, predicate):
def get_template_by_name(module, client, template_name):
return get_template(module, client, lambda template: (template_name == template.NAME))
return get_template(module, client, lambda template: template_name == template.NAME)
def get_template_by_id(module, client, template_id):
return get_template(module, client, lambda template: (template_id == template.ID))
return get_template(module, client, lambda template: template_id == template.ID)
def get_template_id(module, client, requested_id, requested_name):
@@ -805,11 +805,11 @@ def get_datastore(module, client, predicate):
def get_datastore_by_name(module, client, datastore_name):
return get_datastore(module, client, lambda datastore: (datastore_name == datastore.NAME))
return get_datastore(module, client, lambda datastore: datastore_name == datastore.NAME)
def get_datastore_by_id(module, client, datastore_id):
return get_datastore(module, client, lambda datastore: (datastore_id == datastore.ID))
return get_datastore(module, client, lambda datastore: datastore_id == datastore.ID)
def get_datastore_id(module, client, requested_id, requested_name):
@@ -1396,25 +1396,21 @@ def wait_for_running(module, client, vm, wait_timeout):
client,
vm,
wait_timeout,
lambda state, lcm_state: (state in [VM_STATES.index("ACTIVE")] and lcm_state in [LCM_STATES.index("RUNNING")]),
lambda state, lcm_state: state in [VM_STATES.index("ACTIVE")] and lcm_state in [LCM_STATES.index("RUNNING")],
)
def wait_for_done(module, client, vm, wait_timeout):
return wait_for_state(
module, client, vm, wait_timeout, lambda state, lcm_state: (state in [VM_STATES.index("DONE")])
)
return wait_for_state(module, client, vm, wait_timeout, lambda state, lcm_state: state in [VM_STATES.index("DONE")])
def wait_for_hold(module, client, vm, wait_timeout):
return wait_for_state(
module, client, vm, wait_timeout, lambda state, lcm_state: (state in [VM_STATES.index("HOLD")])
)
return wait_for_state(module, client, vm, wait_timeout, lambda state, lcm_state: state in [VM_STATES.index("HOLD")])
def wait_for_poweroff(module, client, vm, wait_timeout):
return wait_for_state(
module, client, vm, wait_timeout, lambda state, lcm_state: (state in [VM_STATES.index("POWEROFF")])
module, client, vm, wait_timeout, lambda state, lcm_state: state in [VM_STATES.index("POWEROFF")]
)

@@ -320,10 +320,10 @@ class NetworksModule(OpenNebulaModule):
return None
def get_template_by_id(self, template_id):
return self.get_template(lambda template: (template_id == template.ID))
return self.get_template(lambda template: template_id == template.ID)
def get_template_by_name(self, name):
return self.get_template(lambda template: (name == template.NAME))
return self.get_template(lambda template: name == template.NAME)
def get_template_instance(self, requested_id, requested_name):
if requested_id:

@@ -133,7 +133,7 @@ def ensure(module, state, packages, params):
"subcommand": "install",
},
"latest": {
"filter": lambda p: (not is_installed(module, p) or not is_latest(module, p)),
"filter": lambda p: not is_installed(module, p) or not is_latest(module, p),
"subcommand": "install",
},
"absent": {

@@ -121,18 +121,26 @@ import operator
import re
import sys
from ansible.module_utils.basic import AnsibleModule
from ansible_collections.community.general.plugins.module_utils.version import LooseVersion
HAS_IMPORTLIB_METADATA = False
try:
import importlib.metadata
HAS_IMPORTLIB_METADATA = True
except ImportError:
pass
HAS_DISTUTILS = False
try:
import pkg_resources
from ansible_collections.community.general.plugins.module_utils.version import LooseVersion
HAS_DISTUTILS = True
except ImportError:
pass
from ansible.module_utils.basic import AnsibleModule
operations = {
"<=": operator.le,
">=": operator.ge,
@@ -155,9 +163,9 @@ def main():
argument_spec=dict(dependencies=dict(type="list", elements="str", default=[])),
supports_check_mode=True,
)
if not HAS_DISTUTILS:
if not HAS_DISTUTILS and not HAS_IMPORTLIB_METADATA:
module.fail_json(
msg='Could not import "distutils" and "pkg_resources" libraries to introspect python environment.',
msg='Could not import "pkg_resources" or "importlib.metadata" libraries to introspect Python environment.',
python=sys.executable,
python_version=sys.version,
python_version_info=python_version_info,
@@ -180,12 +188,20 @@ def main():
module.fail_json(
msg=f"Failed to parse version requirement '{dep}'. Operator must be one of >, <, <=, >=, or =="
)
try:
existing = pkg_resources.get_distribution(pkg).version
except pkg_resources.DistributionNotFound:
# not there
results["not_found"].append(pkg)
continue
if HAS_DISTUTILS:
try:
existing = pkg_resources.get_distribution(pkg).version
except pkg_resources.DistributionNotFound:
# not there
results["not_found"].append(pkg)
continue
else:
try:
existing = importlib.metadata.version(pkg)
except importlib.metadata.PackageNotFoundError:
# not there
results["not_found"].append(pkg)
continue
if op is None and version is None:
results["valid"][pkg] = {
"installed": existing,

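The hunk above adds `importlib.metadata` as a fallback when `pkg_resources` is unavailable. The lookup pattern in isolation (a sketch for Python 3.8+; the helper name is illustrative only):

```python
import importlib.metadata


def installed_version(pkg):
    """Return the installed version string of pkg, or None if it is not installed."""
    try:
        return importlib.metadata.version(pkg)
    except importlib.metadata.PackageNotFoundError:
        return None


# A package name that is certainly not installed yields None instead of raising.
print(installed_version("definitely-not-installed-pkg-xyz"))  # None
```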
@@ -29,9 +29,10 @@ options:
proto:
description:
- Protocol for the specified port.
- Support for V(dccp) and V(sctp) has been added in community.general 12.4.0.
type: str
required: true
choices: [tcp, udp]
choices: [tcp, udp, dccp, sctp]
setype:
description:
- SELinux type for the specified port.
@@ -145,7 +146,7 @@ def semanage_port_get_ports(seport, setype, proto, local):
:param setype: SELinux type.
:type proto: str
:param proto: Protocol ('tcp' or 'udp')
:param proto: Protocol ('tcp', 'udp', 'dccp', 'sctp')
:rtype: list
:return: List of ports that have the specified SELinux type.
@@ -166,7 +167,7 @@ def semanage_port_get_type(seport, port, proto):
:param port: Port or port range (example: "8080", "8080-9090")
:type proto: str
:param proto: Protocol ('tcp' or 'udp')
:param proto: Protocol ('tcp', 'udp', 'dccp', 'sctp')
:rtype: tuple
:return: Tuple containing the SELinux type and MLS/MCS level, or None if not found.
@@ -194,7 +195,7 @@ def semanage_port_add(module, ports, proto, setype, do_reload, serange="s0", ses
:param ports: List of ports and port ranges to add (e.g. ["8080", "8080-9090"])
:type proto: str
:param proto: Protocol ('tcp' or 'udp')
:param proto: Protocol ('tcp', 'udp', 'dccp', 'sctp')
:type setype: str
:param setype: SELinux type
@@ -245,7 +246,7 @@ def semanage_port_del(module, ports, proto, setype, do_reload, sestore="", local
:param ports: List of ports and port ranges to delete (e.g. ["8080", "8080-9090"])
:type proto: str
:param proto: Protocol ('tcp' or 'udp')
:param proto: Protocol ('tcp', 'udp', 'dccp', 'sctp')
:type setype: str
:param setype: SELinux type.
@@ -281,7 +282,7 @@ def main():
argument_spec=dict(
ignore_selinux_state=dict(type="bool", default=False),
ports=dict(type="list", elements="str", required=True),
proto=dict(type="str", required=True, choices=["tcp", "udp"]),
proto=dict(type="str", required=True, choices=["tcp", "udp", "dccp", "sctp"]),
setype=dict(type="str", required=True),
state=dict(type="str", default="present", choices=["absent", "present"]),
reload=dict(type="bool", default=True),

@@ -62,7 +62,7 @@
- block:
- include_tasks: remove_links.yml
- include_tasks: tests_family.yml
when: ansible_os_family == 'RedHat'
when: ansible_facts.os_family == 'RedHat'
# Cleanup
always:
@@ -94,6 +94,6 @@
# *Disable tests on Alpine*
# TODO: figure out whether there is an alternatives tool for Alpine
when:
- ansible_distribution != 'Fedora' or ansible_distribution_major_version|int > 24
- ansible_distribution != 'Archlinux'
- ansible_distribution != 'Alpine'
- ansible_facts.distribution != 'Fedora' or ansible_facts.distribution_major_version|int > 24
- ansible_facts.distribution != 'Archlinux'
- ansible_facts.distribution != 'Alpine'

@@ -6,8 +6,8 @@
- include_vars: '{{ item }}'
with_first_found:
- files:
- '{{ ansible_os_family }}-{{ ansible_distribution_version }}.yml'
- '{{ ansible_os_family }}.yml'
- '{{ ansible_facts.os_family }}-{{ ansible_facts.distribution_version }}.yml'
- '{{ ansible_facts.os_family }}.yml'
- default.yml
paths: ../vars
- template:

@@ -9,8 +9,8 @@
owner: root
group: root
mode: '0644'
when: with_alternatives or ansible_os_family != 'RedHat'
when: with_alternatives or ansible_facts.os_family != 'RedHat'
- file:
path: '{{ alternatives_dir }}/dummy'
state: absent
when: not with_alternatives and ansible_os_family == 'RedHat'
when: not with_alternatives and ansible_facts.os_family == 'RedHat'

@@ -46,11 +46,11 @@
- name: 'check mode (manual: alternatives file existed, it has been updated)'
shell: 'head -n1 {{ alternatives_dir }}/dummy | grep "^manual$"'
when: ansible_os_family != 'RedHat' or with_alternatives or item != 1
when: ansible_facts.os_family != 'RedHat' or with_alternatives or item != 1
- name: 'check mode (auto: alternatives file didn''t exist, it has been created)'
shell: 'head -n1 {{ alternatives_dir }}/dummy | grep "^auto$"'
when: ansible_os_family == 'RedHat' and not with_alternatives and item == 1
when: ansible_facts.os_family == 'RedHat' and not with_alternatives and item == 1
- name: check that alternative has been updated
command: "grep -Pzq '/bin/dummy{{ item }}\\n' '{{ alternatives_dir }}/dummy'"

@@ -17,4 +17,4 @@
mode: '{{ test_conf[2] }}'
# update-alternatives included in Fedora 26 (1.10) & Red Hat 7.4 (1.8) doesn't provide
# '--query' switch, 'link' is mandatory for these distributions.
when: ansible_os_family != 'RedHat' or test_conf[0]
when: ansible_facts.os_family != 'RedHat' or test_conf[0]

@@ -11,21 +11,21 @@
# java >= 17 is not available in RHEL and CentOS7 repos, which is required for sdkmanager to run
- name: Bail out if not supported
when:
- "ansible_os_family == 'RedHat' and ansible_distribution_version is version('8.0', '<')"
- "ansible_facts.os_family == 'RedHat' and ansible_facts.distribution_version is version('8.0', '<')"
ansible.builtin.meta: end_play
- name: Run android_sdk tests
environment:
PATH: '{{ ansible_env.PATH }}:{{ android_sdk_location }}/cmdline-tools/latest/bin'
PATH: '{{ ansible_facts.env.PATH }}:{{ android_sdk_location }}/cmdline-tools/latest/bin'
block:
- import_tasks: setup.yml
- name: Run default tests
import_tasks: default-tests.yml
when: ansible_os_family != 'FreeBSD'
when: ansible_facts.os_family != 'FreeBSD'
# Most of the important Android SDK packages are not available on FreeBSD (like, build-tools, platform-tools and so on),
# but at least some of the functionality can be tested (like, downloading sources)
- name: Run FreeBSD tests
import_tasks: freebsd-tests.yml
when: ansible_os_family == 'FreeBSD'
when: ansible_facts.os_family == 'FreeBSD'

@@ -13,10 +13,10 @@
vars:
params:
files:
- '{{ ansible_distribution }}-{{ ansible_distribution_version }}.yml'
- '{{ ansible_distribution }}-{{ ansible_distribution_major_version }}.yml'
- '{{ ansible_distribution }}.yml'
- '{{ ansible_os_family }}.yml'
- '{{ ansible_facts.distribution }}-{{ ansible_facts.distribution_version }}.yml'
- '{{ ansible_facts.distribution }}-{{ ansible_facts.distribution_major_version }}.yml'
- '{{ ansible_facts.distribution }}.yml'
- '{{ ansible_facts.os_family }}.yml'
paths:
- '{{ role_path }}/vars'
@@ -27,7 +27,7 @@
- "{{ openjdk_pkg }}"
- unzip
state: present
when: ansible_os_family != 'Darwin'
when: ansible_facts.os_family != 'Darwin'
- name: Install dependencies (OSX)
block:
@@ -56,7 +56,7 @@
dest: "/Library/Java/JavaVirtualMachines/openjdk-17.jdk"
state: link
when:
- ansible_os_family == 'Darwin'
- ansible_facts.os_family == 'Darwin'
- name: Create Android SDK directory
file:

@@ -9,7 +9,7 @@
# SPDX-License-Identifier: GPL-3.0-or-later
- meta: end_play
when: ansible_os_family not in ['Debian', 'Suse']
when: ansible_facts.os_family not in ['Debian', 'Suse']
- name: Enable mod_proxy
community.general.apache2_module:
@@ -24,12 +24,12 @@
- name: Add port 81
lineinfile:
path: "/etc/apache2/{{ 'ports.conf' if ansible_os_family == 'Debian' else 'listen.conf' }}"
path: "/etc/apache2/{{ 'ports.conf' if ansible_facts.os_family == 'Debian' else 'listen.conf' }}"
line: Listen 81
- name: Set up virtual host
copy:
dest: "/etc/apache2/{{ 'sites-available' if ansible_os_family == 'Debian' else 'vhosts.d' }}/000-apache2_mod_proxy-test.conf"
dest: "/etc/apache2/{{ 'sites-available' if ansible_facts.os_family == 'Debian' else 'vhosts.d' }}/000-apache2_mod_proxy-test.conf"
content: |
<VirtualHost *:81>
<Proxy balancer://mycluster>
@@ -62,7 +62,7 @@
owner: root
group: root
state: link
when: ansible_os_family not in ['Suse']
when: ansible_facts.os_family not in ['Suse']
- name: Restart Apache
service:

@@ -65,7 +65,7 @@
state: present
- name: Debian/Ubuntu specific tests
when: "ansible_os_family == 'Debian'"
when: "ansible_facts.os_family == 'Debian'"
block:
- name: force disable of autoindex # bug #2499
community.general.apache2_module:
@@ -89,7 +89,7 @@
name: evasive
state: present
# TODO: fix for Debian 13 (Trixie)!
when: ansible_distribution != 'Debian' or ansible_distribution_major_version is version('13', '<')
when: ansible_facts.distribution != 'Debian' or ansible_facts.distribution_major_version is version('13', '<')
- name: use identifier to enable module, fix for https://github.com/ansible/ansible/issues/33669
community.general.apache2_module:

@@ -28,10 +28,10 @@
- name: ensure that all test modules are disabled again
assert:
that: modules_before.stdout == modules_after.stdout
when: ansible_os_family in ['Debian', 'Suse']
when: ansible_facts.os_family in ['Debian', 'Suse']
# centos/RHEL does not have a2enmod/a2dismod
- name: include misleading warning test
include_tasks: 635-apache2-misleading-warning.yml
when: ansible_os_family in ['Debian']
when: ansible_facts.os_family in ['Debian']
# Suse has mpm_event module compiled within the base apache2

@@ -7,7 +7,7 @@
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
- name: Run apk tests on Alpine
when: ansible_distribution in ['Alpine']
when: ansible_facts.distribution in ['Alpine']
block:
- name: Ensure vim is not installed
community.general.apk:

@@ -12,7 +12,7 @@
cargo_environment:
# See https://github.com/rust-lang/cargo/issues/10230#issuecomment-1201662729:
CARGO_NET_GIT_FETCH_WITH_CLI: "true"
when: has_cargo | default(false) and ansible_distribution == 'Alpine'
when: has_cargo | default(false) and ansible_facts.distribution == 'Alpine'
- block:
- import_tasks: test_general.yml
- import_tasks: test_version.yml

@@ -11,11 +11,11 @@
- set_fact:
has_cargo: true
when:
- ansible_system != 'FreeBSD'
- ansible_distribution != 'MacOSX'
- ansible_distribution != 'RedHat' or ansible_distribution_version is version('8.0', '>=')
- ansible_distribution != 'CentOS' or ansible_distribution_version is version('7.0', '>=')
- ansible_distribution != 'Ubuntu' or ansible_distribution_version is version('18', '>=')
- ansible_facts.system != 'FreeBSD'
- ansible_facts.distribution != 'MacOSX'
- ansible_facts.distribution != 'RedHat' or ansible_facts.distribution_version is version('8.0', '>=')
- ansible_facts.distribution != 'CentOS' or ansible_facts.distribution_version is version('7.0', '>=')
- ansible_facts.distribution != 'Ubuntu' or ansible_facts.distribution_version is version('18', '>=')
- block:
- name: Install rust (containing cargo)
@@ -25,7 +25,7 @@
- set_fact:
has_cargo: true
when:
- ansible_system == 'FreeBSD' and ansible_distribution_version is version('13.0', '>')
- ansible_facts.system == 'FreeBSD' and ansible_facts.distribution_version is version('13.0', '>')
- block:
- name: Download rustup
@@ -39,4 +39,4 @@
- set_fact:
rustup_cargo_bin: "{{ lookup('env', 'HOME') }}/.cargo/bin/cargo"
when:
- ansible_distribution != 'CentOS' or ansible_distribution_version is version('7.0', '>=')
- ansible_facts.distribution != 'CentOS' or ansible_facts.distribution_version is version('7.0', '>=')

@@ -11,10 +11,10 @@
- name: Help debugging
debug:
msg: >-
distribution={{ ansible_distribution }},
distribution major version={{ ansible_distribution_major_version }},
os_family={{ ansible_os_family }},
Python version={{ ansible_python.version.major }}
distribution={{ ansible_facts.distribution }},
distribution major version={{ ansible_facts.distribution_major_version }},
os_family={{ ansible_facts.os_family }},
Python version={{ ansible_facts.python.version.major }}
- name: test cloud-init
# TODO: check for a workaround
@@ -23,13 +23,13 @@
# /etc/init/ureadahead.conf to /etc/init/ureadahead.conf.distrib
# https://bugs.launchpad.net/ubuntu/+source/ureadahead/+bug/997838
when:
- not (ansible_distribution == "Ubuntu" and ansible_distribution_major_version|int == 14)
- not (ansible_os_family == "Suse" and ansible_distribution_major_version|int == 15)
- not (ansible_distribution == "CentOS" and ansible_distribution_major_version|int == 8) # TODO: cannot start service
- not (ansible_distribution == 'Archlinux') # TODO: package seems to be broken, cannot be downloaded from mirrors?
- not (ansible_distribution == 'Alpine') # TODO: not sure what's wrong here, the module doesn't return what the tests expect
- not (ansible_distribution == 'Debian' and ansible_distribution_major_version|int == 13) # TODO: not sure what's wrong here, the module doesn't return what the tests expect
- not (ansible_distribution == 'Fedora' and ansible_distribution_major_version|int == 43) # TODO: not sure what's wrong here, the module doesn't return what the tests expect
- not (ansible_facts.distribution == "Ubuntu" and ansible_facts.distribution_major_version|int == 14)
- not (ansible_facts.os_family == "Suse" and ansible_facts.distribution_major_version|int == 15)
- not (ansible_facts.distribution == "CentOS" and ansible_facts.distribution_major_version|int == 8) # TODO: cannot start service
- not (ansible_facts.distribution == 'Archlinux') # TODO: package seems to be broken, cannot be downloaded from mirrors?
- not (ansible_facts.distribution == 'Alpine') # TODO: not sure what's wrong here, the module doesn't return what the tests expect
- not (ansible_facts.distribution == 'Debian' and ansible_facts.distribution_major_version|int == 13) # TODO: not sure what's wrong here, the module doesn't return what the tests expect
- not (ansible_facts.distribution == 'Fedora' and ansible_facts.distribution_major_version|int == 43) # TODO: not sure what's wrong here, the module doesn't return what the tests expect
block:
- name: setup install cloud-init
package:
@@ -41,7 +41,7 @@
user:
name: systemd-network
state: present
when: ansible_distribution == 'Fedora' and ansible_distribution_major_version|int >= 37
when: ansible_facts.distribution == 'Fedora' and ansible_facts.distribution_major_version|int >= 37
- name: setup run cloud-init
service:

@@ -171,8 +171,8 @@ cmd_echo_tests:
- name: use cmd {{ remote_tmp_dir }}/echo
condition: >
{{
ansible_distribution != "MacOSX" and
not (ansible_distribution == "CentOS" and ansible_distribution_major_version is version('7.0', '<'))
ansible_facts.distribution != "MacOSX" and
not (ansible_facts.distribution == "CentOS" and ansible_facts.distribution_major_version is version('7.0', '<'))
}}
copy_to: "{{ remote_tmp_dir }}"
cmd: "{{ remote_tmp_dir }}/echo"
@@ -196,8 +196,8 @@ cmd_echo_tests:
cmd: echo
condition: >
{{
ansible_distribution != "MacOSX" and
not (ansible_distribution == "CentOS" and ansible_distribution_major_version is version('7.0', '<'))
ansible_facts.distribution != "MacOSX" and
not (ansible_facts.distribution == "CentOS" and ansible_facts.distribution_major_version is version('7.0', '<'))
}}
copy_to: "{{ remote_tmp_dir }}"
path_prefix: "{{ remote_tmp_dir }}"

@@ -11,7 +11,7 @@
- name: Install Consul and test
vars:
consul_version: 1.13.2
consul_uri: https://releases.hashicorp.com/consul/{{ consul_version }}/consul_{{ consul_version }}_{{ ansible_system | lower }}_{{ consul_arch }}.zip
consul_uri: https://releases.hashicorp.com/consul/{{ consul_version }}/consul_{{ consul_version }}_{{ ansible_facts.system | lower }}_{{ consul_arch }}.zip
consul_cmd: '{{ remote_tmp_dir }}/consul'
block:
- name: Install requests<2.20 (CentOS/RHEL 6)
@@ -20,7 +20,7 @@
extra_args: "-c {{ remote_constraints }}"
register: result
until: result is success
when: ansible_distribution_file_variety|default() == 'RedHat' and ansible_distribution_major_version is version('6', '<=')
when: ansible_facts.distribution_file_variety|default() == 'RedHat' and ansible_facts.distribution_major_version is version('6', '<=')
- name: Install py-consul
pip:
name: py-consul
@@ -51,15 +51,15 @@
name: unzip
register: result
until: result is success
when: ansible_distribution != "MacOSX"
when: ansible_facts.distribution != "MacOSX"
- assert:
that: ansible_architecture in ['i386', 'x86_64', 'amd64']
that: ansible_facts.architecture in ['i386', 'x86_64', 'amd64']
- set_fact:
consul_arch: '386'
when: ansible_architecture == 'i386'
when: ansible_facts.architecture == 'i386'
- set_fact:
consul_arch: amd64
when: ansible_architecture in ['x86_64', 'amd64']
when: ansible_facts.architecture in ['x86_64', 'amd64']
- name: Download consul binary
unarchive:
src: '{{ consul_uri }}'

@@ -7,9 +7,9 @@
# Fedora or RHEL >= 8
# This module requires the dnf module which is not available on RHEL 7.
- >
ansible_distribution == 'Fedora'
or (ansible_os_family == 'RedHat' and ansible_distribution != 'Fedora'
and ansible_distribution_major_version | int >= 8)
ansible_facts.distribution == 'Fedora'
or (ansible_facts.os_family == 'RedHat' and ansible_facts.distribution != 'Fedora'
and ansible_facts.distribution_major_version | int >= 8)
block:
- debug: var=copr_chroot
- name: enable copr project
@@ -66,8 +66,8 @@
when:
# Copr does not build new packages for EOL Fedoras.
- >
not (ansible_distribution == 'Fedora' and
ansible_distribution_major_version | int < 35)
not (ansible_facts.distribution == 'Fedora' and
ansible_facts.distribution_major_version | int < 35)
block:
- name: install test package from the copr
ansible.builtin.package:

@@ -11,5 +11,5 @@ copr_repofile: '/etc/yum.repos.d/_copr:{{ copr_host }}:{{ copr_namespace }}:{{ c
# TODO: Fix chroot autodetection so this isn't necessary
_copr_chroot_fedora: "fedora-rawhide-x86_64"
_copr_chroot_rhelish: "epel-{{ ansible_distribution_major_version }}-x86_64"
copr_chroot: "{{ _copr_chroot_fedora if ansible_distribution == 'Fedora' else _copr_chroot_rhelish }}"
_copr_chroot_rhelish: "epel-{{ ansible_facts.distribution_major_version }}-x86_64"
copr_chroot: "{{ _copr_chroot_fedora if ansible_facts.distribution == 'Fedora' else _copr_chroot_rhelish }}"


@@ -6,9 +6,9 @@
- name: bail out for non-supported platforms
meta: end_play
when:
- (ansible_os_family != "RedHat" or ansible_distribution_major_version|int < 8) # TODO: bump back to 7
- (ansible_distribution != 'CentOS' or ansible_distribution_major_version|int < 8) # TODO: remove
- ansible_os_family != "Debian"
- (ansible_facts.os_family != "RedHat" or ansible_facts.distribution_major_version|int < 8) # TODO: bump back to 7
- (ansible_facts.distribution != 'CentOS' or ansible_facts.distribution_major_version|int < 8) # TODO: remove
- ansible_facts.os_family != "Debian"
- name: install perl development package for Red Hat family
package:
@@ -17,7 +17,7 @@
- perl-App-cpanminus
state: present
become: true
when: ansible_os_family == "RedHat"
when: ansible_facts.os_family == "RedHat"
- name: install perl development package for Debian family
package:
@@ -25,7 +25,7 @@
- cpanminus
state: present
become: true
when: ansible_os_family == "Debian"
when: ansible_facts.os_family == "Debian"
- name: install a Perl package
cpanm:


@@ -27,15 +27,15 @@
- bzip2
state: latest
when:
- ansible_system != 'FreeBSD'
- ansible_os_family != 'Darwin'
- ansible_os_family != 'Debian'
- ansible_facts.system != 'FreeBSD'
- ansible_facts.os_family != 'Darwin'
- ansible_facts.os_family != 'Debian'
- name: Ensure xz is present to create compressed files (Debian)
package:
name: xz-utils
state: latest
when: ansible_os_family == 'Debian'
when: ansible_facts.os_family == 'Debian'
- name: Ensure xz is present to create compressed file (macOS)
block:
@@ -58,7 +58,7 @@
environment:
HOMEBREW_NO_AUTO_UPDATE: "True"
when:
- ansible_os_family == 'Darwin'
- ansible_facts.os_family == 'Darwin'
- name: Generate compressed files
shell: |


@@ -42,7 +42,7 @@
value: 1
- name: "Field 2"
value: "Text"
timestamp: "{{ ansible_date_time.iso8601 }}"
timestamp: "{{ ansible_facts.date_time.iso8601 }}"
username: Ansible Test
avatar_url: "https://avatars.githubusercontent.com/u/44586252?s=200&v=4"
register: result


@@ -7,6 +7,6 @@
- include_tasks: install.yml
- include_tasks: lock_bash.yml
- include_tasks: lock_updates.yml
when: (ansible_distribution == 'Fedora' and ansible_distribution_major_version is version('23', '>=')) or
(ansible_distribution in ['RedHat', 'CentOS'] and ansible_distribution_major_version is version('8', '>='))
when: (ansible_facts.distribution == 'Fedora' and ansible_facts.distribution_major_version is version('23', '>=')) or
(ansible_facts.distribution in ['RedHat', 'CentOS'] and ansible_facts.distribution_major_version is version('8', '>='))
...
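The `version(...)` Jinja2 test used in conditionals like the ones above compares version strings numerically rather than lexically. A rough stand-alone approximation in Python (the real test delegates to Ansible's version-comparison helpers and handles richer formats; this sketch only covers plain dotted numbers):

```python
def version_lte(a: str, b: str) -> bool:
    """Roughly mimic `a is version(b, '<=')` for dotted numeric versions."""
    def parts(v: str) -> tuple:
        # Split on dots and compare component-wise as integers.
        return tuple(int(p) for p in v.split("."))
    return parts(a) <= parts(b)

print(version_lte("6", "6"))    # True, like version('6', '<=')
print(version_lte("23", "8"))   # False, where a lexical string compare would say True
```

The second line is why these conditionals use the test instead of plain `<=` on strings: lexically, `"23" < "8"`.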


@@ -10,4 +10,4 @@
- name: "include tasks for Debian family"
include_tasks: prepare.yml
when: ansible_pkg_mgr == "apt"
when: ansible_facts.pkg_mgr == "apt"


@@ -14,9 +14,9 @@
# TODO: remove Ubuntu 24.04 (noble) from the list
# TODO: remove Debian 13 (Trixie) from the list
when: >
ansible_distribution in ('Alpine', 'openSUSE Leap', 'CentOS', 'Fedora', 'Archlinux')
or (ansible_distribution == 'Ubuntu' and ansible_distribution_release in ['noble'])
or (ansible_distribution == 'Debian' and ansible_distribution_major_version == '13')
ansible_facts.distribution in ('Alpine', 'openSUSE Leap', 'CentOS', 'Fedora', 'Archlinux')
or (ansible_facts.distribution == 'Ubuntu' and ansible_facts.distribution_release in ['noble'])
or (ansible_facts.distribution == 'Debian' and ansible_facts.distribution_major_version == '13')
- name: Remove ejabberd
ansible.builtin.package:
@@ -47,12 +47,12 @@
loop:
- PrivateDevices
- AmbientCapabilities
when: ansible_distribution == 'Archlinux'
when: ansible_facts.distribution == 'Archlinux'
- name: Make installable on Arch
systemd:
daemon_reload: true
when: ansible_distribution == 'Archlinux'
when: ansible_facts.distribution == 'Archlinux'
- ansible.builtin.service:
name: ejabberd


@@ -15,4 +15,4 @@
- name: run_tests for supported distros
include_tasks: run_tests.yml
when:
- ansible_distribution | lower ~ "-" ~ ansible_distribution_major_version | lower != 'centos-6'
- ansible_facts.distribution | lower ~ "-" ~ ansible_facts.distribution_major_version | lower != 'centos-6'


@@ -32,7 +32,7 @@
- name: Include tasks to test playing with sparse files
include_tasks: sparse.yml
when:
- not (ansible_os_family == 'Darwin' and ansible_distribution_version is version('11', '<'))
- not (ansible_facts.os_family == 'Darwin' and ansible_facts.distribution_version is version('11', '<'))
- name: Include tasks to test playing with symlinks
include_tasks: symlinks.yml


@@ -55,7 +55,7 @@
- name: "Assert that filesystem UUID is changed"
# libblkid gets no UUID at all for this fstype on FreeBSD
when: not (ansible_system == 'FreeBSD' and fstype == 'reiserfs')
when: not (ansible_facts.system == 'FreeBSD' and fstype == 'reiserfs')
ansible.builtin.assert:
that:
- 'fs3_result is changed'
@@ -102,8 +102,8 @@
- when:
- (grow | bool and (fstype != "vfat" or resize_vfat)) or
(fstype == "xfs" and ansible_system == "Linux" and
ansible_distribution not in ["CentOS", "Ubuntu"])
(fstype == "xfs" and ansible_facts.system == "Linux" and
ansible_facts.distribution not in ["CentOS", "Ubuntu"])
block:
- name: "Check that resizefs does nothing if device size is not changed"
community.general.filesystem:


@@ -18,7 +18,7 @@
vars:
search:
files:
- '{{ ansible_distribution }}-{{ ansible_distribution_version }}.yml'
- '{{ ansible_facts.distribution }}-{{ ansible_facts.distribution_version }}.yml'
- 'default.yml'
paths:
- '../vars/'
@@ -36,54 +36,54 @@
# Not available: btrfs, lvm, f2fs, ocfs2
# All BSD systems use swap fs, but only Linux needs mkswap
# Supported: ext2/3/4 (e2fsprogs), xfs (xfsprogs), reiserfs (progsreiserfs), vfat
- 'not (ansible_system == "FreeBSD" and item.0.key in ["bcachefs", "btrfs", "f2fs", "swap", "lvm", "ocfs2"])'
- 'not (ansible_facts.system == "FreeBSD" and item.0.key in ["bcachefs", "btrfs", "f2fs", "swap", "lvm", "ocfs2"])'
# Available on FreeBSD but not on testbed (util-linux conflicts with e2fsprogs): wipefs, mkfs.minix
- 'not (ansible_system == "FreeBSD" and item.1 in ["overwrite_another_fs", "remove_fs"])'
- 'not (ansible_facts.system == "FreeBSD" and item.1 in ["overwrite_another_fs", "remove_fs"])'
# Linux limited support
# Not available: ufs (this is FreeBSD's native fs)
- 'not (ansible_system == "Linux" and item.0.key == "ufs")'
- 'not (ansible_facts.system == "Linux" and item.0.key == "ufs")'
# Other limitations and corner cases
# bcachefs only on Alpine > 3.18 and Arch Linux for now
# other distributions have too old versions of bcachefs-tools and/or util-linux (blkid for UUID tests)
- 'ansible_distribution == "Alpine" and ansible_distribution_version is version("3.18", ">") and item.0.key == "bcachefs"'
- 'ansible_distribution == "Archlinux" and item.0.key == "bcachefs"'
- 'ansible_facts.distribution == "Alpine" and ansible_facts.distribution_version is version("3.18", ">") and item.0.key == "bcachefs"'
- 'ansible_facts.distribution == "Archlinux" and item.0.key == "bcachefs"'
# f2fs-tools and reiserfs-utils packages not available with RHEL/CentOS on CI
- 'not (ansible_distribution in ["CentOS", "RedHat"] and item.0.key in ["f2fs", "reiserfs"])'
- 'not (ansible_os_family == "RedHat" and ansible_distribution_major_version is version("8", ">=") and
- 'not (ansible_facts.distribution in ["CentOS", "RedHat"] and item.0.key in ["f2fs", "reiserfs"])'
- 'not (ansible_facts.os_family == "RedHat" and ansible_facts.distribution_major_version is version("8", ">=") and
item.0.key == "btrfs")'
# reiserfs-utils package not available with Fedora 35 on CI
- 'not (ansible_distribution == "Fedora" and (ansible_facts.distribution_major_version | int >= 35) and
- 'not (ansible_facts.distribution == "Fedora" and (ansible_facts.distribution_major_version | int >= 35) and
item.0.key == "reiserfs")'
# reiserfs packages apparently not available with Alpine
- 'not (ansible_distribution == "Alpine" and item.0.key == "reiserfs")'
- 'not (ansible_facts.distribution == "Alpine" and item.0.key == "reiserfs")'
# reiserfsprogs packages no longer available with Arch Linux
- 'not (ansible_distribution == "Archlinux" and item.0.key == "reiserfs")'
- 'not (ansible_facts.distribution == "Archlinux" and item.0.key == "reiserfs")'
# ocfs2 only available on Debian based distributions
- 'not (item.0.key == "ocfs2" and ansible_os_family != "Debian")'
- 'not (item.0.key == "ocfs2" and ansible_facts.os_family != "Debian")'
# Tests use losetup which can not be used inside unprivileged container
- 'not (item.0.key == "lvm" and ansible_virtualization_type in ["docker", "container", "containerd"])'
- 'not (item.0.key == "lvm" and ansible_facts.virtualization_type in ["docker", "container", "containerd"])'
# vfat resizing fails on Debian (but not Ubuntu)
- 'not (item.0.key == "vfat" and ansible_distribution == "Debian")' # TODO: figure out why it fails, fix it!
- 'not (item.0.key == "vfat" and ansible_facts.distribution == "Debian")' # TODO: figure out why it fails, fix it!
# vfat resizing fails on ArchLinux
- 'not (item.0.key == "vfat" and ansible_distribution == "Archlinux")' # TODO: figure out why it fails, fix it!
- 'not (item.0.key == "vfat" and ansible_facts.distribution == "Archlinux")' # TODO: figure out why it fails, fix it!
# vfat resizing fails on Ubuntu 22.04
- 'not (item.0.key == "vfat" and ansible_distribution == "Ubuntu" and (ansible_facts.distribution_major_version | int == 22))'
- 'not (item.0.key == "vfat" and ansible_facts.distribution == "Ubuntu" and (ansible_facts.distribution_major_version | int == 22))'
# TODO: figure out why it fails, fix it!
# btrfs-progs cannot be installed on ArchLinux
- 'not (item.0.key == "btrfs" and ansible_distribution == "Archlinux")' # TODO: figure out why it fails, fix it!
- 'not (item.0.key == "btrfs" and ansible_facts.distribution == "Archlinux")' # TODO: figure out why it fails, fix it!
# On CentOS 6 shippable containers, wipefs seems unable to remove vfat signatures
- 'not (ansible_distribution == "CentOS" and ansible_distribution_version is version("7.0", "<") and
- 'not (ansible_facts.distribution == "CentOS" and ansible_facts.distribution_version is version("7.0", "<") and
item.1 == "remove_fs" and item.0.key == "vfat")'
# On same systems, mkfs.minix (unhandled by the module) can't find the device/file
- 'not (ansible_distribution == "CentOS" and ansible_distribution_version is version("7.0", "<") and
- 'not (ansible_facts.distribution == "CentOS" and ansible_facts.distribution_version is version("7.0", "<") and
item.1 == "overwrite_another_fs")'
# TODO: something seems to be broken on Alpine
- 'not (ansible_distribution == "Alpine")'
- 'not (ansible_facts.distribution == "Alpine")'
loop: "{{ query('dict', tested_filesystems)|product(['create_fs', 'reset_fs_uuid', 'overwrite_another_fs', 'remove_fs', 'set_fs_uuid_on_creation', 'set_fs_uuid_on_creation_with_opts'])|list }}"
@@ -92,8 +92,8 @@
- include_tasks: freebsd_setup.yml
when:
- 'ansible_system == "FreeBSD"'
- 'ansible_distribution_version is version("12.2", ">=")'
- 'ansible_facts.system == "FreeBSD"'
- 'ansible_facts.distribution_version is version("12.2", ">=")'
- include_tasks: create_device.yml
vars:
@@ -103,7 +103,7 @@
grow: '{{ item.0.value.grow }}'
action: '{{ item.1 }}'
when:
- 'ansible_system == "FreeBSD"'
- 'ansible_distribution_version is version("12.2", ">=")'
- 'ansible_facts.system == "FreeBSD"'
- 'ansible_facts.distribution_version is version("12.2", ">=")'
- 'item.0.key in ["xfs", "vfat"]'
loop: "{{ query('dict', tested_filesystems)|product(['create_fs', 'overwrite_another_fs', 'remove_fs'])|list }}"


@@ -6,7 +6,7 @@
# Skip UUID reset tests for FreeBSD due to "xfs_admin: only 'rewrite' supported on V5 fs"
- when:
- new_uuid | default(False)
- not (ansible_system == "FreeBSD" and fstype == "xfs")
- not (ansible_facts.system == "FreeBSD" and fstype == "xfs")
block:
- name: "Create filesystem ({{ fstype }})"
community.general.filesystem:
@@ -42,8 +42,8 @@
- when:
- (grow | bool and (fstype != "vfat" or resize_vfat)) or
(fstype == "xfs" and ansible_system == "Linux" and
ansible_distribution not in ["CentOS", "Ubuntu"])
(fstype == "xfs" and ansible_facts.system == "Linux" and
ansible_facts.distribution not in ["CentOS", "Ubuntu"])
block:
- name: "Reset filesystem ({{ fstype }}) UUID and resizefs"
ignore_errors: true


@@ -10,7 +10,7 @@
# Skip UUID set at creation tests for FreeBSD due to "xfs_admin: only 'rewrite' supported on V5 fs"
- when:
- new_uuid | default(False)
- not (ansible_system == "FreeBSD" and fstype == "xfs")
- not (ansible_facts.system == "FreeBSD" and fstype == "xfs")
block:
- name: "Create filesystem ({{ fstype }}) with UUID"
community.general.filesystem:


@@ -21,27 +21,27 @@
when:
# bcachefs only on Alpine > 3.18 and Arch Linux for now
# other distributions have too old versions of bcachefs-tools and/or util-linux (blkid for UUID tests)
- ansible_distribution == "Alpine" and ansible_distribution_version is version("3.18", ">")
- ansible_distribution == "Archlinux"
- ansible_facts.distribution == "Alpine" and ansible_facts.distribution_version is version("3.18", ">")
- ansible_facts.distribution == "Archlinux"
- name: "Install btrfs progs"
ansible.builtin.package:
name: btrfs-progs
state: present
when:
- ansible_os_family != 'Suse'
- not (ansible_distribution == 'Ubuntu' and ansible_distribution_version is version('16.04', '<='))
- ansible_system != "FreeBSD"
- ansible_facts.os_family != 'Suse'
- not (ansible_facts.distribution == 'Ubuntu' and ansible_facts.distribution_version is version('16.04', '<='))
- ansible_facts.system != "FreeBSD"
- not (ansible_facts.os_family == "RedHat" and ansible_facts.distribution_major_version is version('8', '>='))
- ansible_os_family != 'Archlinux' # TODO
- ansible_facts.os_family != 'Archlinux' # TODO
- name: "Install btrfs tools (Ubuntu <= 16.04)"
ansible.builtin.package:
name: btrfs-tools
state: present
when:
- ansible_distribution == 'Ubuntu'
- ansible_distribution_version is version('16.04', '<=')
- ansible_facts.distribution == 'Ubuntu'
- ansible_facts.distribution_version is version('16.04', '<=')
- name: "Install btrfs progs (OpenSuse)"
ansible.builtin.package:
@@ -49,14 +49,14 @@
- python3-xml
- btrfsprogs
state: present
when: ansible_os_family == 'Suse'
when: ansible_facts.os_family == 'Suse'
- name: "Install reiserfs utils (Fedora)"
ansible.builtin.package:
name: reiserfs-utils
state: present
when:
- ansible_distribution == 'Fedora' and (ansible_facts.distribution_major_version | int < 35)
- ansible_facts.distribution == 'Fedora' and (ansible_facts.distribution_major_version | int < 35)
- name: "Install reiserfs and util-linux-systemd (for findmnt) (OpenSuse)"
ansible.builtin.package:
@@ -65,34 +65,34 @@
- util-linux-systemd
state: present
when:
- ansible_os_family == 'Suse'
- ansible_facts.os_family == 'Suse'
- name: "Install reiserfs progs (Debian and more)"
ansible.builtin.package:
name: reiserfsprogs
state: present
when:
- ansible_system == 'Linux'
- ansible_os_family not in ['Suse', 'RedHat', 'Alpine', 'Archlinux']
- ansible_facts.system == 'Linux'
- ansible_facts.os_family not in ['Suse', 'RedHat', 'Alpine', 'Archlinux']
- name: "Install reiserfs progs (FreeBSD)"
ansible.builtin.package:
name: progsreiserfs
state: present
when:
- ansible_system == 'FreeBSD'
- ansible_facts.system == 'FreeBSD'
- name: "Install ocfs2 (Debian)"
ansible.builtin.package:
name: ocfs2-tools
state: present
when: ansible_os_family == 'Debian'
when: ansible_facts.os_family == 'Debian'
- name: "Install f2fs tools and get version"
when:
- ansible_os_family != 'RedHat' or ansible_distribution == 'Fedora'
- ansible_distribution != 'Ubuntu' or ansible_distribution_version is version('16.04', '>=')
- ansible_system != "FreeBSD"
- ansible_facts.os_family != 'RedHat' or ansible_facts.distribution == 'Fedora'
- ansible_facts.distribution != 'Ubuntu' or ansible_facts.distribution_version is version('16.04', '>=')
- ansible_facts.system != "FreeBSD"
block:
- name: "Install f2fs tools"
ansible.builtin.package:
@@ -117,14 +117,14 @@
name:
- dosfstools
- lvm2
when: ansible_system == 'Linux'
when: ansible_facts.system == 'Linux'
- name: "Install fatresize and get version"
when:
- ansible_system == 'Linux'
- ansible_os_family != 'Suse'
- ansible_os_family != 'RedHat' or (ansible_distribution == 'CentOS' and ansible_distribution_version is version('7.0', '=='))
- ansible_os_family != 'Alpine'
- ansible_facts.system == 'Linux'
- ansible_facts.os_family != 'Suse'
- ansible_facts.os_family != 'RedHat' or (ansible_facts.distribution == 'CentOS' and ansible_facts.distribution_version is version('7.0', '=='))
- ansible_facts.os_family != 'Alpine'
block:
- name: "Install fatresize"
ansible.builtin.package:


@@ -3,64 +3,58 @@
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
- name: 'Define ini_test_dict'
ansible.builtin.set_fact:
ini_test_dict:
section_name:
key_name: 'key value'
another_section:
connection: 'ssh'
interpolate_test:
interpolate_test_key: '%'
- name: 'Write INI file that reflects ini_test_dict to {{ ini_test_file }}'
ansible.builtin.copy:
dest: '{{ ini_test_file }}'
content: |
- name: Basic test
ansible.builtin.assert:
that:
- ini_file_content | community.general.from_ini == ini_test_dict
vars:
ini_file_content: |
[section_name]
key_name = key value
[another_section]
connection = ssh
[empty section]
[interpolate_test]
interpolate_test_key = %
ini_test_dict:
section_name:
key_name: key value
- name: 'Slurp the test file: {{ ini_test_file }}'
ansible.builtin.slurp:
src: '{{ ini_test_file }}'
register: 'ini_file_content'
another_section:
connection: ssh
- name: >-
Ensure defined ini_test_dict is the same when retrieved
from {{ ini_test_file }}
empty section: {}
interpolate_test:
interpolate_test_key: '%'
- name: Test delimiters
ansible.builtin.assert:
that:
- 'ini_file_content.content | b64decode | community.general.from_ini ==
ini_test_dict'
- ini_file_content | community.general.from_ini(delimiters=["="]) == ini_test_dict
vars:
ini_file_content: |
[section_name]
key_name * : with spaces = key value
ini_test_dict:
section_name:
'key_name * : with spaces': 'key value'
- name: 'Create a file that is not INI formatted: {{ ini_bad_file }}'
ansible.builtin.copy:
dest: '{{ ini_bad_file }}'
content: |
Testing a not INI formatted file.
- name: 'Slurp the file that is not INI formatted: {{ ini_bad_file }}'
ansible.builtin.slurp:
src: '{{ ini_bad_file }}'
register: 'ini_bad_file_content'
- name: 'Try parsing the bad file with from_ini: {{ ini_bad_file }}'
- name: Try parsing the bad file with from_ini
ansible.builtin.debug:
var: ini_bad_file_content | b64decode | community.general.from_ini
register: 'ini_bad_file_debug'
var: ini_bad_file_content | community.general.from_ini
vars:
ini_bad_file_content: |
Testing a not INI formatted file.
register: ini_bad_file_debug
ignore_errors: true
- name: 'Ensure from_ini raised the correct exception'
ansible.builtin.assert:
that:
- ini_bad_file_debug is failed
- "'from_ini failed to parse given string' in ini_bad_file_debug.msg"
- "'File contains no section headers' in ini_bad_file_debug.msg"
...
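The simplified `from_ini` tests above feed INI text straight into the filter and compare against an expected dictionary, including a raw `%` (interpolation disabled) and a custom `delimiters=["="]` case. A hedged sketch of the behaviour being exercised, using Python's `configparser` (the actual filter lives in community.general; this only mirrors its observable behaviour in these tests):

```python
from configparser import ConfigParser

def from_ini(text: str, delimiters=("=", ":")) -> dict:
    # Interpolation is off, so a literal '%' value parses cleanly,
    # matching the interpolate_test case in the diff above.
    parser = ConfigParser(delimiters=delimiters, interpolation=None)
    parser.read_string(text)
    return {section: dict(parser.items(section)) for section in parser.sections()}

ini = "[section_name]\nkey_name = key value\n[interpolate_test]\ninterpolate_test_key = %\n"
print(from_ini(ini))
# With delimiters=("=",) only, a key containing ':' survives intact:
print(from_ini("[section_name]\nkey_name * : with spaces = key value\n", delimiters=("=",)))
```

Parsing a non-INI string (no section header) raises an error, which is what the "bad file" assertion on `File contains no section headers` checks.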


@@ -1,8 +0,0 @@
---
# Copyright (c) 2023, Steffen Scheib <steffen@scheib.me>
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
ini_test_file: '/tmp/test.ini'
ini_bad_file: '/tmp/bad.file'
...


@@ -3,12 +3,6 @@
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
- name: Debug ansible_version
ansible.builtin.debug:
var: ansible_version
when: not (quiet_test | default(true) | bool)
tags: ansible_version
- name: Tests
ansible.builtin.assert:
that:


@@ -3,12 +3,6 @@
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
- name: Debug ansible_version
debug:
var: ansible_version
when: debug_test|default(false)|bool
tags: t0
- name: 1. Test lists merged by attribute name
block:
- name: Test lists merged by attribute name debug


@@ -3,12 +3,6 @@
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
- name: Debug ansible_version
ansible.builtin.debug:
var: ansible_version
when: not (quiet_test | default(true) | bool)
tags: ansible_version
- name: Tests
ansible.builtin.assert:
that:


@@ -3,12 +3,6 @@
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
- name: Debug ansible_version
ansible.builtin.debug:
var: ansible_version
when: not (quiet_test | default(true) | bool)
tags: ansible_version
- name: Tests
ansible.builtin.assert:
that:


@@ -60,5 +60,5 @@
pkill -f -- '{{ remote_tmp_dir }}/serve.py'
when: |
ansible_distribution == 'Fedora' or
ansible_distribution == 'Ubuntu' and not ansible_distribution_major_version | int < 16
ansible_facts.distribution == 'Fedora' or
ansible_facts.distribution == 'Ubuntu' and not ansible_facts.distribution_major_version | int < 16


@@ -8,7 +8,7 @@
name: flatpak
state: present
become: true
when: ansible_distribution == 'Fedora'
when: ansible_facts.distribution == 'Fedora'
- block:
- name: Activate flatpak ppa on Ubuntu
@@ -16,14 +16,14 @@
repo: ppa:alexlarsson/flatpak
state: present
mode: '0644'
when: ansible_lsb.major_release | int < 18
when: ansible_facts.lsb.major_release | int < 18
- name: Install flatpak package on Ubuntu
apt:
name: flatpak
state: present
when: ansible_distribution == 'Ubuntu'
when: ansible_facts.distribution == 'Ubuntu'
- name: Install dummy remote for user
flatpak_remote:
@@ -62,7 +62,7 @@
mode: '0755'
- name: Start HTTP server
command: '{{ ansible_python.executable }} {{ remote_tmp_dir }}/serve.py 127.0.0.1 8000 /tmp/flatpak/'
command: '{{ ansible_facts.python.executable }} {{ remote_tmp_dir }}/serve.py 127.0.0.1 8000 /tmp/flatpak/'
async: 120
poll: 0
register: webserver_status


@@ -46,5 +46,5 @@
method: system
when: |
ansible_distribution == 'Fedora' or
ansible_distribution == 'Ubuntu' and not ansible_distribution_major_version | int < 16
ansible_facts.distribution == 'Fedora' or
ansible_facts.distribution == 'Ubuntu' and not ansible_facts.distribution_major_version | int < 16


@@ -7,19 +7,19 @@
dnf:
name: flatpak
state: present
when: ansible_distribution == 'Fedora'
when: ansible_facts.distribution == 'Fedora'
- block:
- name: Activate flatpak ppa on Ubuntu versions older than 18.04/bionic
apt_repository:
repo: ppa:alexlarsson/flatpak
state: present
mode: '0644'
when: ansible_lsb.major_release | int < 18
when: ansible_facts.lsb.major_release | int < 18
- name: Install flatpak package on Ubuntu
apt:
name: flatpak
state: present
when: ansible_distribution == 'Ubuntu'
when: ansible_facts.distribution == 'Ubuntu'
- name: Install flatpak remote for testing check mode
flatpak_remote:
name: check-mode-test-remote


@@ -10,16 +10,16 @@
# SPDX-License-Identifier: GPL-3.0-or-later
- when:
- not (ansible_os_family == 'Alpine') # TODO
- not (ansible_facts.os_family == 'Alpine') # TODO
block:
- include_vars: '{{ item }}'
with_first_found:
- files:
- '{{ ansible_distribution }}-{{ ansible_distribution_major_version }}.yml'
- '{{ ansible_distribution }}-{{ ansible_distribution_version }}.yml'
- '{{ ansible_distribution }}.yml'
- '{{ ansible_os_family }}.yml'
- '{{ ansible_facts.distribution }}-{{ ansible_facts.distribution_major_version }}.yml'
- '{{ ansible_facts.distribution }}-{{ ansible_facts.distribution_version }}.yml'
- '{{ ansible_facts.distribution }}.yml'
- '{{ ansible_facts.os_family }}.yml'
- 'default.yml'
paths: '../vars'
@@ -28,7 +28,7 @@
name: "{{ item }}"
state: present
loop: "{{ test_packages }}"
when: ansible_distribution != "MacOSX"
when: ansible_facts.distribution != "MacOSX"
- name: Install a gem
gem:
@@ -44,7 +44,7 @@
msg: "failed to install gem: {{ install_gem_result.msg }}"
when:
- install_gem_result is failed
- not (ansible_user_uid == 0 and "User --install-dir or --user-install but not both" not in install_gem_result.msg)
- not (ansible_facts.user_uid == 0 and "User --install-dir or --user-install but not both" not in install_gem_result.msg)
- block:
- name: List gems
@@ -108,7 +108,7 @@
that:
- remove_gem_results is changed
- current_gems.stdout is not search('gist\s+\([0-9.]+\)')
when: ansible_user_uid == 0
when: ansible_facts.user_uid == 0
# Check custom gem directory
- name: Install gem in a custom directory with incorrect options
@@ -217,12 +217,12 @@
community.general.gem:
name: json
state: absent
when: ansible_distribution == "Ubuntu"
when: ansible_facts.distribution == "Ubuntu"
register: gem_result
ignore_errors: true
- name: Assert gem uninstall failed as expected
when: ansible_distribution == "Ubuntu"
when: ansible_facts.distribution == "Ubuntu"
assert:
that:
- gem_result is failed


@@ -9,7 +9,7 @@
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
- when: ansible_distribution in ['MacOSX']
- when: ansible_facts.distribution in ['MacOSX']
block:
- include_tasks: 'formulae.yml'
- include_tasks: 'casks.yml'


@@ -12,13 +12,13 @@
- name: Find brew binary
command: which brew
register: brew_which
when: ansible_distribution in ['MacOSX']
when: ansible_facts.distribution in ['MacOSX']
- name: Get owner of brew binary
stat:
path: "{{ brew_which.stdout }}"
register: brew_stat
when: ansible_distribution in ['MacOSX']
when: ansible_facts.distribution in ['MacOSX']
- block:
- name: Install cask


@@ -18,7 +18,7 @@
- name: Install legacycrypt on Python 3.13+
pip:
name: legacycrypt
when: ansible_python_version is version("3.13", ">=")
when: ansible_facts.python_version is version("3.13", ">=")
- name: Check and start systemd-homed service
service:
@@ -183,5 +183,5 @@
# homectl was first introduced in systemd 245 so check version >= 245 and make sure system has systemd and homectl command
when:
- systemd_version.rc == 0 and (systemd_version.stdout | regex_search('[0-9][0-9][0-9]') | int >= 245) and homectl_version.rc == 0
- ansible_distribution != 'Archlinux' # TODO!
- ansible_distribution != 'Fedora' or ansible_distribution_major_version|int < 36 # TODO!
- ansible_facts.distribution != 'Archlinux' # TODO!
- ansible_facts.distribution != 'Fedora' or ansible_facts.distribution_major_version|int < 36 # TODO!


@@ -9,4 +9,4 @@
# SPDX-License-Identifier: GPL-3.0-or-later
- include_tasks: tests.yml
when: ansible_distribution == 'Ubuntu' and ansible_distribution_release == 'trusty'
when: ansible_facts.distribution == 'Ubuntu' and ansible_facts.distribution_release == 'trusty'


@@ -14,7 +14,7 @@
- set_fact:
validate_certs: false
when: (ansible_distribution == "MacOSX" and ansible_distribution_version == "10.11.1")
when: (ansible_facts.distribution == "MacOSX" and ansible_facts.distribution_version == "10.11.1")
- name: get information about current IP using ipify facts
ipify_facts:

View File

@@ -52,12 +52,12 @@
- set_fact:
file_contents: "{{ get_file_content.stdout }}"
when: ansible_distribution == 'RedHat' and ansible_distribution_version is version('7.9', '==')
when: ansible_facts.distribution == 'RedHat' and ansible_facts.distribution_version is version('7.9', '==')
- name: Get the content of file test02.cfg
set_fact:
file_contents: "{{ lookup('file', mount_root_dir + '/test02.cfg') }}"
when: not (ansible_distribution == 'RedHat' and ansible_distribution_version is version('7.9', '=='))
when: not (ansible_facts.distribution == 'RedHat' and ansible_facts.distribution_version is version('7.9', '=='))
- fail: msg="Failed to replace the file test02.cfg"
when: file_contents != "test"


@@ -10,7 +10,7 @@
delete_files:
- "test01.cfg"
- debug: var=ansible_distribution
- debug: var=ansible_facts.distribution
- include_tasks: iso_mount.yml
vars:


@@ -3,7 +3,7 @@
# GNU General Public License v3.0+ (see LICENSES/GPL-3.0-or-later.txt or https://www.gnu.org/licenses/gpl-3.0.txt)
# SPDX-License-Identifier: GPL-3.0-or-later
- debug: var=ansible_distribution
- debug: var=ansible_facts.distribution
- block:
- name: "Mount customized ISO on MAC"
@@ -20,7 +20,7 @@
- set_fact:
mount_root_dir: "{{ test_dir }}/iso_mount/CDROM"
when: iso_name.find('udf') != -1
when: ansible_distribution == "MacOSX"
when: ansible_facts.distribution == "MacOSX"
- block:
- name: "Mount {{ iso_name }} to {{ test_dir }}/iso_mount on localhost"
@@ -36,4 +36,4 @@
- set_fact:
mount_root_dir: "{{ test_dir }}/iso_mount"
when:
- ansible_distribution != "MacOSX"
- ansible_facts.distribution != "MacOSX"


@@ -10,7 +10,7 @@
- name: Skip some platforms which do not support ansible.posix.mount
meta: end_play
when: ansible_distribution in ['Alpine']
when: ansible_facts.distribution in ['Alpine']
- set_fact:
test_dir: '{{ remote_tmp_dir }}/test_iso_customize'


@@ -33,13 +33,13 @@
- name: MACOS | Find brew binary
command: which brew
register: brew_which
when: ansible_distribution in ['MacOSX']
when: ansible_facts.distribution in ['MacOSX']
- name: MACOS | Get owner of brew binary
stat:
path: "{{ brew_which.stdout }}"
register: brew_stat
when: ansible_distribution in ['MacOSX']
when: ansible_facts.distribution in ['MacOSX']
- name: MACOS | Install 7zip package
homebrew:


@@ -17,14 +17,14 @@
- name: Doesn't work with Fedora 43 for some reason
meta: end_play
when:
- ansible_distribution == 'Fedora' and ansible_distribution_major_version is version('43', '==')
- ansible_facts.distribution == 'Fedora' and ansible_facts.distribution_major_version is version('43', '==')
- name: Install EPEL repository (RHEL only)
include_role:
name: setup_epel
when:
- ansible_distribution in ['RedHat', 'CentOS']
- ansible_distribution_major_version is version('9', '<')
- ansible_facts.distribution in ['RedHat', 'CentOS']
- ansible_facts.distribution_major_version is version('9', '<')
- name: Install 7zip
import_tasks: 7zip.yml


@@ -49,7 +49,7 @@
certificate_path: "{{ test_cert_path }}"
privatekey_path: "{{ test_key_path }}"
when:
- "not (ansible_os_family == 'RedHat' and ansible_distribution_version is version('8.0', '<'))"
- "not (ansible_facts.os_family == 'RedHat' and ansible_facts.distribution_version is version('8.0', '<'))"
- name: Create the pkcs12 archive from the test x509 cert (command)
ansible.builtin.command:
@@ -62,8 +62,8 @@
-passout stdin
stdin: "{{ test_keystore2_password }}"
when:
- "ansible_os_family == 'RedHat'"
- "ansible_distribution_version is version('8.0', '<')"
- "ansible_facts.os_family == 'RedHat'"
- "ansible_facts.distribution_version is version('8.0', '<')"
- name: Create the pkcs12 archive from the certificate we will be trying to add to the keystore
community.crypto.openssl_pkcs12:
@@ -73,7 +73,7 @@
certificate_path: "{{ test_cert2_path }}"
privatekey_path: "{{ test_key2_path }}"
when:
- "not (ansible_os_family == 'RedHat' and ansible_distribution_version is version('8.0', '<'))"
- "not (ansible_facts.os_family == 'RedHat' and ansible_facts.distribution_version is version('8.0', '<'))"
- name: Create the pkcs12 archive from the certificate we will be trying to add to the keystore (command)
ansible.builtin.command:
@@ -86,8 +86,8 @@
-passout stdin
stdin: "{{ test_keystore2_password }}"
when:
- "ansible_facts.os_family == 'RedHat'"
- "ansible_facts.distribution_version is version('8.0', '<')"
#
# Run tests
@@ -246,7 +246,7 @@
dest: "{{ remote_tmp_dir }}"
- name: Create an SSL server that we will use for testing URL imports
command: "{{ ansible_facts.python.executable }} {{ remote_tmp_dir }}/setupSSLServer.py {{ remote_tmp_dir }} {{ test_ssl_port }}"
async: 10
poll: 0


@@ -456,4 +456,74 @@
- end_state.attributes["backchannel.logout.session.required"] == 'false'
- end_state.attributes["oauth2.device.authorization.grant.enabled"] == 'false'
vars:
end_state: "{{ check_client_when_present_and_attributes_modified.end_state }}"
# ---- Tests for valid_post_logout_redirect_uris and backchannel_logout_url ----
- name: Create client with post logout redirect URIs and backchannel logout URL
community.general.keycloak_client:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
realm: "{{ realm }}"
client_id: logout-test-client
valid_post_logout_redirect_uris: "{{ post_logout_redirect_uris }}"
backchannel_logout_url: "{{ backchannel_logout_url }}"
state: present
register: result_create_logout_client
- name: Assert logout client is created with correct attributes
assert:
that:
- result_create_logout_client is changed
- result_create_logout_client.end_state.attributes["post.logout.redirect.uris"] is defined
- result_create_logout_client.end_state.attributes["backchannel.logout.url"] == backchannel_logout_url
- name: Re-create client with same logout fields (idempotency)
community.general.keycloak_client:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
realm: "{{ realm }}"
client_id: logout-test-client
valid_post_logout_redirect_uris: "{{ post_logout_redirect_uris }}"
backchannel_logout_url: "{{ backchannel_logout_url }}"
state: present
register: result_idempotent_logout_client
- name: Assert logout client is idempotent
assert:
that:
- result_idempotent_logout_client is not changed
- name: Update client logout fields
community.general.keycloak_client:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
realm: "{{ realm }}"
client_id: logout-test-client
valid_post_logout_redirect_uris:
- "https://example.com/new-logout"
backchannel_logout_url: "https://example.com/new-backchannel"
state: present
register: result_update_logout_client
- name: Assert logout client fields are updated
assert:
that:
- result_update_logout_client is changed
- result_update_logout_client.end_state.attributes["backchannel.logout.url"] == "https://example.com/new-backchannel"
- name: Delete logout test client
community.general.keycloak_client:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
realm: "{{ realm }}"
client_id: logout-test-client
state: absent


@@ -20,6 +20,11 @@ auth_args:
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
post_logout_redirect_uris:
- "https://example.com/logout-callback"
- "https://example.com/signout"
backchannel_logout_url: "https://example.com/backchannel-logout"
redirect_uris1:
- "http://example.c.com/"
- "http://example.b.com/"


@@ -364,6 +364,996 @@
- result.end_state.config.priority == ["150"]
- result.msg == "Realm key testkey_with_certificate was in sync"
# ============================================================
# Tests for auto-generated key providers
# ============================================================
- name: Create HMAC key (hmac-generated provider, check mode)
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: hmac-test-key
state: present
parent_id: "{{ realm }}"
provider_id: hmac-generated
config:
enabled: true
active: true
priority: 100
algorithm: HS256
secret_size: 64
check_mode: true
register: result
- name: Assert HMAC key would be created
assert:
that:
- result is changed
- result.end_state != {}
- result.end_state.name == "hmac-test-key"
- result.end_state.providerId == "hmac-generated"
- result.end_state.config.algorithm == ["HS256"]
- result.end_state.config.secretSize == ["64"]
- result.msg == "Realm key hmac-test-key would be created"
- name: Create HMAC key (hmac-generated provider)
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: hmac-test-key
state: present
parent_id: "{{ realm }}"
provider_id: hmac-generated
config:
enabled: true
active: true
priority: 100
algorithm: HS256
secret_size: 64
register: result
- name: Assert HMAC key was created
assert:
that:
- result is changed
- result.end_state != {}
- result.end_state.name == "hmac-test-key"
- result.end_state.providerId == "hmac-generated"
- result.end_state.providerType == "org.keycloak.keys.KeyProvider"
- result.end_state.config.algorithm == ["HS256"]
- result.end_state.config.secretSize == ["64"]
- result.msg == "Realm key hmac-test-key created"
- name: Create HMAC key (test for idempotency)
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: hmac-test-key
state: present
parent_id: "{{ realm }}"
provider_id: hmac-generated
config:
enabled: true
active: true
priority: 100
algorithm: HS256
secret_size: 64
register: result
- name: Assert HMAC key is in sync
assert:
that:
- result is not changed
- result.msg == "Realm key hmac-test-key was in sync"
- name: Update HMAC key priority
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: hmac-test-key
state: present
parent_id: "{{ realm }}"
provider_id: hmac-generated
config:
enabled: true
active: true
priority: 110
algorithm: HS256
secret_size: 64
register: result
- name: Assert HMAC key was updated
assert:
that:
- result is changed
- result.end_state.config.priority == ["110"]
- "'config.priority' in result.msg"
- name: Remove HMAC key
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: hmac-test-key
state: absent
parent_id: "{{ realm }}"
provider_id: hmac-generated
config:
priority: 110
register: result
- name: Assert HMAC key was deleted
assert:
that:
- result is changed
- result.end_state == {}
- result.msg == "Realm key hmac-test-key deleted"
# ============================================================
# AES generated key tests
# ============================================================
- name: Create AES key (aes-generated provider)
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: aes-test-key
state: present
parent_id: "{{ realm }}"
provider_id: aes-generated
config:
enabled: true
active: true
priority: 100
secret_size: 32
register: result
- name: Assert AES key was created
assert:
that:
- result is changed
- result.end_state != {}
- result.end_state.name == "aes-test-key"
- result.end_state.providerId == "aes-generated"
- result.end_state.config.secretSize == ["32"]
- result.msg == "Realm key aes-test-key created"
- name: Create AES key (test for idempotency)
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: aes-test-key
state: present
parent_id: "{{ realm }}"
provider_id: aes-generated
config:
enabled: true
active: true
priority: 100
secret_size: 32
register: result
- name: Assert AES key is in sync
assert:
that:
- result is not changed
- result.msg == "Realm key aes-test-key was in sync"
- name: Remove AES key
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: aes-test-key
state: absent
parent_id: "{{ realm }}"
provider_id: aes-generated
config:
priority: 100
register: result
- name: Assert AES key was deleted
assert:
that:
- result is changed
- result.msg == "Realm key aes-test-key deleted"
# ============================================================
# ECDSA generated key tests
# ============================================================
- name: Create ECDSA key (ecdsa-generated provider)
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: ecdsa-test-key
state: present
parent_id: "{{ realm }}"
provider_id: ecdsa-generated
config:
enabled: true
active: true
priority: 100
algorithm: ES256
elliptic_curve: P-256
register: result
- name: Assert ECDSA key was created
assert:
that:
- result is changed
- result.end_state != {}
- result.end_state.name == "ecdsa-test-key"
- result.end_state.providerId == "ecdsa-generated"
- result.end_state.config.algorithm == ["ES256"]
- result.end_state.config.ecdsaEllipticCurveKey == ["P-256"]
- result.msg == "Realm key ecdsa-test-key created"
- name: Create ECDSA key (test for idempotency)
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: ecdsa-test-key
state: present
parent_id: "{{ realm }}"
provider_id: ecdsa-generated
config:
enabled: true
active: true
priority: 100
algorithm: ES256
elliptic_curve: P-256
register: result
- name: Assert ECDSA key is in sync
assert:
that:
- result is not changed
- result.msg == "Realm key ecdsa-test-key was in sync"
- name: Remove ECDSA key
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: ecdsa-test-key
state: absent
parent_id: "{{ realm }}"
provider_id: ecdsa-generated
config:
priority: 100
register: result
- name: Assert ECDSA key was deleted
assert:
that:
- result is changed
- result.msg == "Realm key ecdsa-test-key deleted"
# ============================================================
# RSA generated key tests
# ============================================================
- name: Create RSA generated key (rsa-generated provider)
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: rsa-gen-test-key
state: present
parent_id: "{{ realm }}"
provider_id: rsa-generated
config:
enabled: true
active: true
priority: 100
algorithm: RS256
key_size: 2048
register: result
- name: Assert RSA generated key was created
assert:
that:
- result is changed
- result.end_state != {}
- result.end_state.name == "rsa-gen-test-key"
- result.end_state.providerId == "rsa-generated"
- result.end_state.config.algorithm == ["RS256"]
- result.end_state.config.keySize == ["2048"]
- result.msg == "Realm key rsa-gen-test-key created"
- name: Create RSA generated key (test for idempotency)
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: rsa-gen-test-key
state: present
parent_id: "{{ realm }}"
provider_id: rsa-generated
config:
enabled: true
active: true
priority: 100
algorithm: RS256
key_size: 2048
register: result
- name: Assert RSA generated key is in sync
assert:
that:
- result is not changed
- result.msg == "Realm key rsa-gen-test-key was in sync"
- name: Remove RSA generated key
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: rsa-gen-test-key
state: absent
parent_id: "{{ realm }}"
provider_id: rsa-generated
config:
priority: 100
register: result
- name: Assert RSA generated key was deleted
assert:
that:
- result is changed
- result.msg == "Realm key rsa-gen-test-key deleted"
# ============================================================
# Test managing default realm keys (issue #11459)
# ============================================================
- name: Update priority of default hmac-generated key
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: hmac-generated
state: present
parent_id: "{{ realm }}"
provider_id: hmac-generated
config:
enabled: true
active: true
priority: 150
register: result
- name: Assert default hmac-generated key was updated
assert:
that:
- result is changed
- result.end_state.config.priority == ["150"]
- name: Remove default hmac-generated key
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: hmac-generated
state: absent
parent_id: "{{ realm }}"
provider_id: hmac-generated
config:
priority: 150
register: result
- name: Assert default hmac-generated key was deleted
assert:
that:
- result is changed
- result.end_state == {}
- result.msg == "Realm key hmac-generated deleted"
# ============================================================
# RSA encryption generated key tests (rsa-enc-generated)
# ============================================================
- name: Create RSA encryption key (rsa-enc-generated provider)
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: rsa-enc-gen-test-key
state: present
parent_id: "{{ realm }}"
provider_id: rsa-enc-generated
config:
enabled: true
active: true
priority: 100
algorithm: RSA-OAEP
key_size: 2048
register: result
- name: Assert RSA encryption key was created
assert:
that:
- result is changed
- result.end_state != {}
- result.end_state.name == "rsa-enc-gen-test-key"
- result.end_state.providerId == "rsa-enc-generated"
- result.end_state.config.algorithm == ["RSA-OAEP"]
- result.end_state.config.keySize == ["2048"]
- result.msg == "Realm key rsa-enc-gen-test-key created"
- name: Create RSA encryption key (test for idempotency)
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: rsa-enc-gen-test-key
state: present
parent_id: "{{ realm }}"
provider_id: rsa-enc-generated
config:
enabled: true
active: true
priority: 100
algorithm: RSA-OAEP
key_size: 2048
register: result
- name: Assert RSA encryption key is in sync
assert:
that:
- result is not changed
- result.msg == "Realm key rsa-enc-gen-test-key was in sync"
- name: Remove RSA encryption key
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: rsa-enc-gen-test-key
state: absent
parent_id: "{{ realm }}"
provider_id: rsa-enc-generated
config:
priority: 100
register: result
- name: Assert RSA encryption key was deleted
assert:
that:
- result is changed
- result.msg == "Realm key rsa-enc-gen-test-key deleted"
# ============================================================
# ECDH generated key tests (ecdh-generated)
# ============================================================
- name: Create ECDH key (ecdh-generated provider)
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: ecdh-test-key
state: present
parent_id: "{{ realm }}"
provider_id: ecdh-generated
config:
enabled: true
active: true
priority: 100
algorithm: ECDH_ES
elliptic_curve: P-256
register: result
- name: Assert ECDH key was created
assert:
that:
- result is changed
- result.end_state != {}
- result.end_state.name == "ecdh-test-key"
- result.end_state.providerId == "ecdh-generated"
- result.end_state.config.algorithm == ["ECDH_ES"]
- result.end_state.config.ecdhEllipticCurveKey == ["P-256"]
- result.msg == "Realm key ecdh-test-key created"
- name: Create ECDH key (test for idempotency)
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: ecdh-test-key
state: present
parent_id: "{{ realm }}"
provider_id: ecdh-generated
config:
enabled: true
active: true
priority: 100
algorithm: ECDH_ES
elliptic_curve: P-256
register: result
- name: Assert ECDH key is in sync
assert:
that:
- result is not changed
- result.msg == "Realm key ecdh-test-key was in sync"
- name: Remove ECDH key
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: ecdh-test-key
state: absent
parent_id: "{{ realm }}"
provider_id: ecdh-generated
config:
priority: 100
register: result
- name: Assert ECDH key was deleted
assert:
that:
- result is changed
- result.msg == "Realm key ecdh-test-key deleted"
# ============================================================
# EdDSA generated key tests (eddsa-generated)
# ============================================================
- name: Create EdDSA key (eddsa-generated provider)
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: eddsa-test-key
state: present
parent_id: "{{ realm }}"
provider_id: eddsa-generated
config:
enabled: true
active: true
priority: 100
elliptic_curve: Ed25519
register: result
- name: Assert EdDSA key was created
assert:
that:
- result is changed
- result.end_state != {}
- result.end_state.name == "eddsa-test-key"
- result.end_state.providerId == "eddsa-generated"
- result.end_state.config.eddsaEllipticCurveKey == ["Ed25519"]
- result.msg == "Realm key eddsa-test-key created"
- name: Create EdDSA key (test for idempotency)
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: eddsa-test-key
state: present
parent_id: "{{ realm }}"
provider_id: eddsa-generated
config:
enabled: true
active: true
priority: 100
elliptic_curve: Ed25519
register: result
- name: Assert EdDSA key is in sync
assert:
that:
- result is not changed
- result.msg == "Realm key eddsa-test-key was in sync"
- name: Remove EdDSA key
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: eddsa-test-key
state: absent
parent_id: "{{ realm }}"
provider_id: eddsa-generated
config:
priority: 100
register: result
- name: Assert EdDSA key was deleted
assert:
that:
- result is changed
- result.msg == "Realm key eddsa-test-key deleted"
# ============================================================
# Java Keystore provider tests (java-keystore)
# Note: These tests require a keystore file on the Keycloak server
# They are conditionally skipped if test_keystore_path is not defined
# ============================================================
- name: Create java-keystore key (check mode)
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: jks-test-key
state: present
parent_id: "{{ realm }}"
provider_id: java-keystore
config:
enabled: true
active: true
priority: 100
algorithm: RS256
keystore: "{{ test_keystore_path }}"
keystore_password: "{{ test_keystore_password }}"
key_alias: "{{ test_key_alias }}"
check_mode: true
register: result
when: test_keystore_path is defined
- name: Assert java-keystore key would be created (check mode)
assert:
that:
- result is changed
- result.end_state != {}
- result.end_state.name == "jks-test-key"
- result.end_state.providerId == "java-keystore"
- result.end_state.config.algorithm == ["RS256"]
- result.end_state.config.keystore == [test_keystore_path]
- result.end_state.config.keyAlias == [test_key_alias]
- result.msg == "Realm key jks-test-key would be created"
when: test_keystore_path is defined
- name: Create java-keystore key
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: jks-test-key
state: present
parent_id: "{{ realm }}"
provider_id: java-keystore
config:
enabled: true
active: true
priority: 100
algorithm: RS256
keystore: "{{ test_keystore_path }}"
keystore_password: "{{ test_keystore_password }}"
key_alias: "{{ test_key_alias }}"
register: result
when: test_keystore_path is defined
- name: Assert java-keystore key was created
assert:
that:
- result is changed
- result.end_state != {}
- result.end_state.name == "jks-test-key"
- result.end_state.providerId == "java-keystore"
- result.end_state.providerType == "org.keycloak.keys.KeyProvider"
- result.end_state.config.algorithm == ["RS256"]
- result.end_state.key_info is defined
- result.end_state.key_info.kid is defined
- result.end_state.key_info.certificate_fingerprint is defined
- result.end_state.key_info.status == "ACTIVE"
- result.msg == "Realm key jks-test-key created"
when: test_keystore_path is defined
- name: Create java-keystore key (test for idempotency)
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: jks-test-key
state: present
parent_id: "{{ realm }}"
provider_id: java-keystore
config:
enabled: true
active: true
priority: 100
algorithm: RS256
keystore: "{{ test_keystore_path }}"
keystore_password: "{{ test_keystore_password }}"
key_alias: "{{ test_key_alias }}"
register: result
when: test_keystore_path is defined
- name: Assert java-keystore key is in sync
assert:
that:
- result is not changed
- result.msg == "Realm key jks-test-key was in sync"
when: test_keystore_path is defined
- name: Update java-keystore key priority
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: jks-test-key
state: present
parent_id: "{{ realm }}"
provider_id: java-keystore
config:
enabled: true
active: true
priority: 110
algorithm: RS256
keystore: "{{ test_keystore_path }}"
keystore_password: "{{ test_keystore_password }}"
key_alias: "{{ test_key_alias }}"
register: result
when: test_keystore_path is defined
- name: Assert java-keystore key was updated
assert:
that:
- result is changed
- result.end_state.config.priority == ["110"]
- "'config.priority' in result.msg"
when: test_keystore_path is defined
- name: Remove java-keystore key
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: jks-test-key
state: absent
parent_id: "{{ realm }}"
provider_id: java-keystore
config:
priority: 110
register: result
when: test_keystore_path is defined
- name: Assert java-keystore key was deleted
assert:
that:
- result is changed
- result.end_state == {}
- result.msg == "Realm key jks-test-key deleted"
when: test_keystore_path is defined
# ============================================================
# Java Keystore update_password tests
# ============================================================
- name: Create java-keystore key with update_password=always (default)
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: jks-update-pw-test
state: present
parent_id: "{{ realm }}"
provider_id: java-keystore
# update_password: always is the default
config:
enabled: true
active: true
priority: 100
algorithm: RS256
keystore: "{{ test_keystore_path }}"
keystore_password: "{{ test_keystore_password }}"
key_alias: "{{ test_key_alias }}"
register: result
when: test_keystore_path is defined
- name: Assert java-keystore key was created
assert:
that:
- result is changed
- result.end_state != {}
- result.end_state.name == "jks-update-pw-test"
- result.msg == "Realm key jks-update-pw-test created"
when: test_keystore_path is defined
- name: Re-run with update_password=always (should NOT be idempotent - passwords always sent)
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: jks-update-pw-test
state: present
parent_id: "{{ realm }}"
provider_id: java-keystore
update_password: always
config:
enabled: true
active: true
priority: 100
algorithm: RS256
keystore: "{{ test_keystore_path }}"
keystore_password: "{{ test_keystore_password }}"
key_alias: "{{ test_key_alias }}"
register: result
when: test_keystore_path is defined
# Note: With update_password=always, the module always sends the passwords to Keycloak.
# Keycloak does not report back whether the passwords changed, so the module reports
# "in sync" for the config comparison (passwords are excluded from the comparison).
# The key difference: `always` sends the real passwords, `on_create` sends masked values.
- name: Assert java-keystore key is in sync (no config changes detected)
assert:
that:
- result is not changed
- result.msg == "Realm key jks-update-pw-test was in sync"
when: test_keystore_path is defined
- name: Remove java-keystore key to test update_password=on_create
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: jks-update-pw-test
state: absent
parent_id: "{{ realm }}"
provider_id: java-keystore
config:
priority: 100
register: result
when: test_keystore_path is defined
- name: Create java-keystore key with update_password=on_create
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: jks-update-pw-test
state: present
parent_id: "{{ realm }}"
provider_id: java-keystore
update_password: on_create
config:
enabled: true
active: true
priority: 100
algorithm: RS256
keystore: "{{ test_keystore_path }}"
keystore_password: "{{ test_keystore_password }}"
key_alias: "{{ test_key_alias }}"
register: result
when: test_keystore_path is defined
- name: Assert java-keystore key was created with on_create mode
assert:
that:
- result is changed
- result.end_state != {}
- result.end_state.name == "jks-update-pw-test"
- result.msg == "Realm key jks-update-pw-test created"
when: test_keystore_path is defined
- name: Re-run with update_password=on_create (should be idempotent)
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: jks-update-pw-test
state: present
parent_id: "{{ realm }}"
provider_id: java-keystore
update_password: on_create
config:
enabled: true
active: true
priority: 100
algorithm: RS256
keystore: "{{ test_keystore_path }}"
keystore_password: "{{ test_keystore_password }}"
key_alias: "{{ test_key_alias }}"
register: result
when: test_keystore_path is defined
- name: Assert java-keystore key is idempotent with on_create mode
assert:
that:
- result is not changed
- result.msg == "Realm key jks-update-pw-test was in sync"
when: test_keystore_path is defined
- name: Update priority with update_password=on_create (passwords preserved)
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: jks-update-pw-test
state: present
parent_id: "{{ realm }}"
provider_id: java-keystore
update_password: on_create
config:
enabled: true
active: true
priority: 110
algorithm: RS256
keystore: "{{ test_keystore_path }}"
keystore_password: "{{ test_keystore_password }}"
key_alias: "{{ test_key_alias }}"
register: result
when: test_keystore_path is defined
- name: Assert priority was updated but passwords preserved
assert:
that:
- result is changed
- result.end_state.config.priority == ["110"]
- "'config.priority' in result.msg"
when: test_keystore_path is defined
- name: Remove java-keystore update_password test key
community.general.keycloak_realm_key:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
name: jks-update-pw-test
state: absent
parent_id: "{{ realm }}"
provider_id: java-keystore
config:
priority: 110
register: result
when: test_keystore_path is defined
- name: Assert java-keystore update_password test key was deleted
assert:
that:
- result is changed
- result.end_state == {}
when: test_keystore_path is defined
- name: Remove Keycloak test realm
community.general.keycloak_realm:
auth_keycloak_url: "{{ url }}"


@@ -112,3 +112,59 @@
that:
- delete_result.changed
- delete_result.end_state | length == 0
- name: Create user with plus-addressed email
community.general.keycloak_user:
auth_keycloak_url: "{{ url }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
auth_realm: "{{ admin_realm }}"
username: "testuser+tag"
realm: "{{ realm }}"
first_name: Plus
last_name: User
email: "testuser+tag@example.org"
state: present
register: plus_create_result
- name: Assert plus-addressed user is created
assert:
that:
- plus_create_result.changed
- plus_create_result.end_state.username == 'testuser+tag'
- plus_create_result.end_state.email == 'testuser+tag@example.org'
- name: Re-run plus-addressed user creation (idempotency)
community.general.keycloak_user:
auth_keycloak_url: "{{ url }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
auth_realm: "{{ admin_realm }}"
username: "testuser+tag"
realm: "{{ realm }}"
first_name: Plus
last_name: User
email: "testuser+tag@example.org"
state: present
register: plus_idempotent_result
- name: Assert plus-addressed user is idempotent
assert:
that:
- plus_idempotent_result is not changed
- name: Delete plus-addressed user
community.general.keycloak_user:
auth_keycloak_url: "{{ url }}"
auth_username: "{{ admin_user }}"
auth_password: "{{ admin_password }}"
auth_realm: "{{ admin_realm }}"
username: "testuser+tag"
realm: "{{ realm }}"
state: absent
register: plus_delete_result
- name: Assert plus-addressed user is deleted
assert:
that:
- plus_delete_result.changed


@@ -37,8 +37,8 @@
- name: Map a realm role to client service account
vars:
roles:
- name: '{{ role }}'
community.general.keycloak_user_rolemapping:
auth_keycloak_url: "{{ url }}"
auth_realm: "{{ admin_realm }}"
@@ -58,8 +58,8 @@
 - name: Unmap a realm role from client service account
   vars:
-    - roles:
-        - name: '{{ role }}'
+    roles:
+      - name: '{{ role }}'
   community.general.keycloak_user_rolemapping:
     auth_keycloak_url: "{{ url }}"
     auth_realm: "{{ admin_realm }}"
@@ -89,6 +89,18 @@
     name: "{{ role }}"
     state: absent
+- name: Create second client (for cross-client role mapping test)
+  community.general.keycloak_client:
+    auth_keycloak_url: "{{ url }}"
+    auth_realm: "{{ admin_realm }}"
+    auth_username: "{{ admin_user }}"
+    auth_password: "{{ admin_password }}"
+    realm: "{{ realm }}"
+    client_id: "{{ client_id_2 }}"
+    service_accounts_enabled: true
+    state: present
+  register: client_2
 - name: Create new client role
   community.general.keycloak_role:
     auth_keycloak_url: "{{ url }}"
@@ -101,10 +113,54 @@
     description: "{{ description_1 }}"
     state: present
+- name: Map a client role to a user with no existing roles for that client
+  vars:
+    roles:
+      - name: '{{ role }}'
+  community.general.keycloak_user_rolemapping:
+    auth_keycloak_url: "{{ url }}"
+    auth_realm: "{{ admin_realm }}"
+    auth_username: "{{ admin_user }}"
+    auth_password: "{{ admin_password }}"
+    realm: "{{ realm }}"
+    client_id: "{{ client_id }}"
+    service_account_user_client_id: "{{ client_id_2 }}"
+    roles: "{{ roles }}"
+    state: present
+  register: result
+- name: Assert client role is assigned to user with no prior roles
+  assert:
+    that:
+      - result is changed
+      - result.end_state | selectattr("clientRole", "eq", true) | selectattr("name", "eq", role) | list | count > 0
+- name: Unmap the cross-client role mapping
+  vars:
+    roles:
+      - name: '{{ role }}'
+  community.general.keycloak_user_rolemapping:
+    auth_keycloak_url: "{{ url }}"
+    auth_realm: "{{ admin_realm }}"
+    auth_username: "{{ admin_user }}"
+    auth_password: "{{ admin_password }}"
+    realm: "{{ realm }}"
+    client_id: "{{ client_id }}"
+    service_account_user_client_id: "{{ client_id_2 }}"
+    roles: "{{ roles }}"
+    state: absent
+  register: result
+- name: Assert cross-client role mapping is removed
+  assert:
+    that:
+      - result is changed
+      - result.end_state == []
 - name: Map a client role to client service account
   vars:
-    - roles:
-        - name: '{{ role }}'
+    roles:
+      - name: '{{ role }}'
   community.general.keycloak_user_rolemapping:
     auth_keycloak_url: "{{ url }}"
     auth_realm: "{{ admin_realm }}"
@@ -125,8 +181,8 @@
 - name: Unmap a client role from client service account
   vars:
-    - roles:
-        - name: '{{ role }}'
+    roles:
+      - name: '{{ role }}'
   community.general.keycloak_user_rolemapping:
     auth_keycloak_url: "{{ url }}"
     auth_realm: "{{ admin_realm }}"


@@ -9,6 +9,7 @@ admin_user: admin
 admin_password: password
 realm: myrealm
 client_id: myclient
+client_id_2: myotherclient
 role: myrole
 description_1: desc 1
 description_2: desc 2


@@ -27,4 +27,4 @@
     - test_reload
     - test_runatload
-  when: ansible_os_family == 'Darwin'
+  when: ansible_facts.os_family == 'Darwin'


@@ -13,4 +13,4 @@
 - include_tasks: "{{ item }}"
   with_fileglob:
     - 'tests/*.yml'
-  when: ansible_os_family in ['Ubuntu', 'Debian']
+  when: ansible_facts.os_family in ['Ubuntu', 'Debian']

Some files were not shown because too many files have changed in this diff.