65 Commits

Author SHA1 Message Date
Zuul
382a43c461 Merge "Add support for setting shard key on baremetal node" 2026-05-12 22:34:14 +00:00
Zuul
bcac650895 Merge "Add volume_image_metadata" 2026-05-11 22:05:26 +00:00
Zuul
c3cd0d2838 Merge "Add port_forwarding modules" 2026-05-11 22:05:24 +00:00
Zuul
143c959f7d Merge "Only clear out security groups if the parameter is set" 2026-05-11 21:11:14 +00:00
Zuul
0e6e281316 Merge "CI: Run jobs for all non-EOL Ansible versions" 2026-05-05 11:21:41 +00:00
Zuul
a02bba1152 Merge "Add ability to allocate floating IP" 2026-05-04 16:43:39 +00:00
Simon Dodsley
80557dae33 Add volume_image_metadata
This module introduces the ability to set image metadata,
which is distinct from regular volume metadata and is
required for correct boot-from-volume behaviour.

This allows workflows such as replication and disaster
recovery to correctly preserve image provenance and
Nova boot semantics.

Change-Id: I55732fbe8fc6bd579b8542f834033650a076db76
Signed-off-by: Simon Dodsley <simon@purestorage.com>
2026-04-30 10:36:39 +01:00
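A minimal sketch of how the new module might be invoked. The module name comes from the commit title; the parameter names (`volume`, `image_metadata`) and values are assumptions based on the commit message, not a confirmed API:

```yaml
# Hypothetical usage sketch; parameter names and values are assumptions.
- name: Set image metadata on a boot-from-volume volume
  openstack.cloud.volume_image_metadata:
    cloud: devstack
    volume: ansible_volume        # volume name or ID
    state: present
    image_metadata:
      image_name: cirros-0.6.2    # illustrative values only
      min_disk: "1"
```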
Zuul
d4a77620cc Merge "fix(trunk): Always add relevant sub_ports" 2026-04-29 18:54:39 +00:00
Michal Nasiadka
b3e1cdd730 CI: Run jobs for all non-EOL Ansible versions
Switch Jammy nodesets to Noble

Change-Id: I7d18196c12d5670a5a458d64d901ffcf3b0b0af4
Signed-off-by: Michal Nasiadka <mnasiadka@gmail.com>
2026-04-29 07:18:52 +00:00
Zuul
9e7332ffff Merge "feat(images): Adds support for image import and 'uploading' status" 2026-04-29 05:11:59 +00:00
Zuul
e545283299 Merge "Add bootable volume support" 2026-04-28 20:21:40 +00:00
Zuul
885fadb31e Merge "Add authorization_ttl option for Keystone IDP." 2026-04-20 06:15:05 +00:00
Zuul
da833ac8dc Merge "Add schema_version property to federation_mapping" 2026-04-16 10:43:34 +00:00
Doug Szumski
b1932e1b06 Add support for setting shard key on baremetal node
Ironic supports setting a shard key on baremetal nodes
which can be used to scale out the Nova Compute Ironic
service. This change adds support for setting the shard
key.

Change-Id: I9694470a8ce6d964d6251bda4463f025bd4245e0
Signed-off-by: Doug Szumski <doug@stackhpc.com>
2026-04-09 15:51:50 +01:00
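Based on the `shard` field added to the node's expected fields elsewhere in this change, usage would look roughly like the following sketch (the other parameters follow the existing baremetal_node module):

```yaml
- name: Assign a shard key to a baremetal node
  openstack.cloud.baremetal_node:
    cloud: devstack
    name: ansible_baremetal_node
    shard: shard-1    # used to scale out the Nova Compute Ironic service
    state: present
```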
Nicholas Kuechler
09a4e4248d feat(images): Adds support for image import and 'uploading' status
Change-Id: Ib5023c376f7fe1f9850ac7bdacaf6dea695cce0f
Signed-off-by: Nicholas Kuechler <nicholas.kuechler@rackspace.com>
2026-03-24 11:25:52 -05:00
Zuul
ccec4d07b3 Merge "Add support for managing network segments" 2026-03-23 15:16:45 +00:00
Taavi Ansper
1387aec1db Add schema_version property to federation_mapping
Closes-Bug: #2145020
Change-Id: Id4e19dd4d63ae083280a78863a6cdecbd043ea7c
Signed-off-by: Taavi Ansper <taaviansperr@gmail.com>
2026-03-21 12:45:10 +02:00
Zuul
04b70b99da Merge "Support updating extra_specs in project module" 2026-03-19 21:41:00 +00:00
Zuul
ed4c4036af Merge "feat(images): Adds support for image ID reservation and queued images" 2026-03-19 07:37:34 +00:00
Zuul
e2ebc1c8d0 Merge "tox: Drop basepython" 2026-03-19 07:37:33 +00:00
Zuul
99eb60f7dc Merge "Add baremetal_port_group module" 2026-03-19 06:56:08 +00:00
Nicholas Kuechler
e90fd7a915 feat(images): Adds support for image ID reservation and queued images
Change-Id: I3aa319deb711eaa1ccad4f48eedb079afd801872
Signed-off-by: Nicholas Kuechler <nicholas.kuechler@rackspace.com>
2026-03-12 14:52:48 -05:00
Taavi Ansper
b3a31eb6d5 Add authorization_ttl option for Keystone IDP.
Closes-Bug: #2142395

Change-Id: Ib3fab86da2170cc6a349c06906ad27bf54ed0d5c
Signed-off-by: Taavi Ansper <taaviansperr@gmail.com>
2026-02-22 17:36:44 +02:00
Artem Goncharov
1bc4f648fb Rotate the galaxy secret
Change-Id: I4d25f3b7831c80c615e6dd967a5e5065452cdce8
Signed-off-by: Artem Goncharov <artem.goncharov@gmail.com>
2026-02-17 17:15:54 +01:00
Grzegorz Koper
1a654a9c38 Add baremetal_port_group module
Add support for managing Ironic baremetal port groups.

Include CI role coverage and unit tests for create, update, delete, and check mode behavior.

Add a reno fragment describing the new module.

Tests-Run: python -m pytest tests/unit/modules/cloud/openstack/test_baremetal_port_group.py
Change-Id: I98564fcb5b81a1dd7be1fbf5ffca364483296655
2026-02-10 14:45:35 +01:00
Jan-Philipp Litza
af4d72a3bb fix(trunk): Always add relevant sub_ports
This fixes two similar issues:
1. sub_ports weren't added when initially creating the trunk
2. sub_ports identified by ID instead of name weren't added

Closes-Bug: #2089589
Change-Id: I1342e23aafdd44eaf16f236d6d07ace41ae1d247
Signed-off-by: Jan-Philipp Litza <janphilipp@litza.de>
2026-01-30 14:09:22 +01:00
Ivan Anfimov
67b7ec5e58 tox: Drop basepython
Python 2 reached its EOL a long time ago, and we no longer
expect any user to attempt to run tox with Python 2.

Removing the option allows us to remove ignore_basepython_conflict.

Change-Id: I54c0581e77c924f9d98975964e4470e89fa3d954
Co-authored-by: Takashi Kajinami <kajinamit@oss.nttdata.com>
Signed-off-by: Ivan Anfimov <lazekteam@gmail.com>
2026-01-24 12:53:27 +00:00
James Hewitt
b1e4d4b714 Only clear out security groups if the parameter is set
Because security groups are associated with ports not servers, this is a tricky UX.

When creating a server, security groups are applied to any new ports created but not existing ports because they haven't been interrogated yet. When updating a server, the security groups for all attached ports are updated. Because we have an empty list as default, this means if you don't specify security groups on the server (because you're setting them on the port instead), then server creation will work fine, but update will clear out the groups from the port.

This patch changes the default groups for the server resource to be None, so that we can tell if the user doesn't specify any groups and if they don't, rely on the ports having them set instead. It improves idempotency.

Closes-Bug: #2137488
Change-Id: Iedcc9a34dc5ac847496eab19880189e4dc3c517a
Signed-off-by: James Hewitt <james.hewitt@uk.ibm.com>
2026-01-07 15:12:23 +00:00
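A sketch of the resulting behaviour; the task shape follows the existing server module and is illustrative only:

```yaml
# With the new None default, omitting security_groups on an update
# leaves the groups configured on the attached ports untouched.
- name: Update a server without clearing port security groups
  openstack.cloud.server:
    cloud: devstack
    name: ansible_server
    state: present
    # security_groups intentionally omitted

# Passing an explicit list still applies it to all attached ports.
- name: Replace security groups on all attached ports
  openstack.cloud.server:
    cloud: devstack
    name: ansible_server
    state: present
    security_groups:
      - default
```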
James Hewitt
a4ed67b054 Add bootable volume support
Add support for setting volumes to be bootable on creation, as well as support for updating the bootable flag.

Closes-Bug: #2137559
Change-Id: I60bac613060551c4d6144675b1553b4fdda2d13d
Signed-off-by: James Hewitt <james.hewitt@uk.ibm.com>
2026-01-07 11:49:47 +00:00
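A hedged sketch of the new option; the parameter name `is_bootable` is an assumption based on the SDK's volume field, not confirmed by the commit:

```yaml
# is_bootable is an assumed parameter name.
- name: Create a bootable volume
  openstack.cloud.volume:
    cloud: devstack
    state: present
    name: ansible_boot_volume
    size: 10
    is_bootable: true
```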
naosuke
70128d6230 Support updating extra_specs in project module
This adds support for setting extra_specs when updating a project.

Change-Id: I98a73ed9367d52df82213b3b7c484ceac10acf3d
Signed-off-by: Naoki Hanakawa <naoki.hanakawa@lycorp.co.jp>
2025-12-17 10:13:52 +09:00
Riccardo Pittau
1dc367b566 Use new bifrost CI jobs names
Ironic no longer supports tinyipa; all jobs now run with
DIB-based IPA ramdisks, so their config and names have been updated.

Change-Id: Id7b260a0965d941d3a34eb181a068a9a5e7189ef
Signed-off-by: Riccardo Pittau <elfosardo@gmail.com>
2025-12-16 09:28:32 +01:00
Austin Jamias
86d9e2e00a Add port_forwarding modules
This adds the ability to manage floating IP port forwarding resources.

Change-Id: Ifd7cb30faf0efbd043474d2d6c23b87a55ee73de
Signed-off-by: Austin Jamias <ajamias@redhat.com>
2025-12-10 12:59:52 -05:00
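A hypothetical sketch of the new capability; the parameter names mirror Neutron's floating IP port forwarding API and are assumptions, not confirmed module options:

```yaml
- name: Forward external TCP port 2222 to an internal SSH port
  openstack.cloud.port_forwarding:
    cloud: devstack
    state: present
    floating_ip: 203.0.113.10
    internal_ip_address: 10.7.7.10
    internal_port: 22
    external_port: 2222
    protocol: tcp
```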
Austin Jamias
f0e0388159 Add ability to allocate floating IP
Previously, this module could only be used to create a floating IP if it
didn't exist and attempt to attach it to a Nova server. This commit
adds support for creating a standalone floating IP not attached to
anything, and optionally attaching it to a fixed IP in a fixed network.

Change-Id: Id65ce98674b6b9d93dd4cfbbdf2c5c51798fca38
Signed-off-by: Austin Jamias <ajamias@redhat.com>
2025-12-04 15:15:05 -05:00
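The two new behaviours can be sketched as tasks like the following, using the parameter names that appear in this change's CI tests:

```yaml
- name: Allocate a standalone floating IP, not attached to anything
  openstack.cloud.floating_ip:
    cloud: devstack
    state: present
    network: public

- name: Allocate a floating IP bound to a fixed IP on a fixed network
  openstack.cloud.floating_ip:
    cloud: devstack
    state: present
    network: public
    nat_destination: ansible_internal
    fixed_address: 10.7.7.104
```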
Maksim Malchuk
a178493281 Drop duplicate lines in the Changelog
Change-Id: I0475545d8114d8731a2131eed372c7557b579e3f
Signed-off-by: Maksim Malchuk <maksim.malchuk@gmail.com>
2025-11-17 16:19:28 +03:00
Austin Jamias
b2aac80b41 Fix tox.ini linters_2_18 config
Correct a typo where linters_2_18's requirements were not being
referenced correctly.

Change-Id: I1c7a7555c9effd4f0ceb417be709059aa10d0a5e
Signed-off-by: Austin Jamias <ajamias@redhat.com>
2025-11-10 19:45:36 -05:00
Zuul
ae6e48be00 Merge "Fix Ansible errors" 2025-10-31 18:03:37 +00:00
Michal Nasiadka
e06a61f97a Fix Ansible errors
Change-Id: I826ec0b01f8cfdf78235d146c90d790c8e891cc9
Signed-off-by: Michal Nasiadka <mnasiadka@gmail.com>
2025-10-27 07:40:26 +00:00
Artem Goncharov
3c0a9f2d94 Release 2.5.0 version
Change-Id: Ie96930eda27984833f386bbcd65ebf2eeda4e40c
Signed-off-by: Artem Goncharov <artem.goncharov@gmail.com>
2025-10-21 13:09:36 +02:00
Tadas Sutkaitis
b1ecd54a5d feat: introduce share_type modules
Add share_type and share_type_info modules.
Uses direct Manila API calls via the SDK's session/connection interface
since share type resources are not available in openstacksdk.

Change-Id: I49af9a53435e226c5cc93a14190f85ef4637c798
Signed-off-by: Tadas Sutkaitis <tadasas@gmail.com>
2025-10-08 20:51:26 +00:00
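A sketch of the new modules; the exact parameters are assumptions, since the commit only names the modules:

```yaml
# Sketch only; parameter names are assumptions.
- name: Create a Manila share type
  openstack.cloud.share_type:
    cloud: devstack
    state: present
    name: ansible_share_type
    driver_handles_share_servers: true

- name: List available share types
  openstack.cloud.share_type_info:
    cloud: devstack
  register: share_types
```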
Zuul
4a3460e727 Merge "Add import_method to module" 2025-10-07 17:13:52 +00:00
Jay Jahns
c9887b3a23 Add import_method to module
This adds the import_method parameter to support the web-download
option, which has been added to openstacksdk.

Closes-Bug: #2115023
Change-Id: I3236cb50b1265e0d7596ada9122aa3b4fc2baf9e
Depends-On: https://review.opendev.org/c/openstack/ansible-collections-openstack/+/955752
2025-08-26 12:04:42 +00:00
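A hedged sketch of the web-download flow; `import_method` comes from the commit, but the `uri` parameter name is an assumption:

```yaml
# uri is an assumed parameter name for the download source.
- name: Import an image via web-download instead of a local upload
  openstack.cloud.image:
    cloud: devstack
    state: present
    name: cirros-web
    import_method: web-download
    uri: https://example.com/cirros.img
```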
Andrew Bonney
e84ebb0773 ci: fix various issues preventing complete runs
When creating a coe cluster, a flavor lookup occurs which
fails as nothing is specified. This adds a default flavor
which can be used to test cluster provisioning.

Further fixes for Ansible 2.19:

- Various checks used a deprecated truthy check which no longer works.
- The inventory cache format changed.
- galaxy.yml is required in the tmp testing directory, or version
  checks fail for linters.

Change-Id: Iaf3f05d0841a541e4318821fe44ddd59f236b640
2025-07-25 10:15:30 +01:00
Andrew Bonney
eef8368e6f Add support for managing network segments
Adds a module to manage Neutron network segments where the
segmentation plugin is enabled.

Segments are relatively simple and do not support modification
beyond the name/description, so most attributes are used for
initial segment creation, or filtering results in order to
perform updates.

Depends-On: https://review.opendev.org/c/openstack/ansible-collections-openstack/+/955752
Change-Id: I4647fd96aaa15460d82765365f98a18ddf2693db
2025-07-24 09:04:30 +00:00
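A sketch of what creating a segment might look like; the module name and parameters are assumptions modelled on the Neutron segments API:

```yaml
# Module name and parameters are assumptions; treat as a sketch.
- name: Create an additional segment on an existing network
  openstack.cloud.network_segment:
    cloud: devstack
    state: present
    name: ansible_segment
    network: ansible_network
    network_type: vlan
    physical_network: physnet1
    segmentation_id: 101
```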
Zuul
f584c54dfd Merge "Add volume_manage module" 2025-06-05 12:42:20 +00:00
Zuul
3901000119 Merge "feat: add support for filters in inventory" 2025-06-05 12:08:15 +00:00
Simon Dodsley
556208fc3c Add volume_manage module
This module introduces the ability to manage and unmanage an
existing volume on a Cinder backend.

Due to API limitations, when unmanaging a volume, only the
volume ID can be provided.

Change-Id: If969f198864e6bd65dbb9fce4923af1674da34bc
2025-05-31 09:46:52 -04:00
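A sketch of the manage/unmanage flow; the `host` and `ref` parameter names are assumptions modelled on Cinder's manageable-volume API:

```yaml
# host and ref are assumed parameter names.
- name: Manage an existing backend volume into Cinder
  openstack.cloud.volume_manage:
    cloud: devstack
    state: present
    name: ansible_managed_volume
    host: devstack@lvmdriver-1#lvmdriver-1
    ref:
      source-name: existing-lv
  register: managed

# As noted above, unmanaging accepts only the volume ID.
- name: Unmanage the volume again
  openstack.cloud.volume_manage:
    cloud: devstack
    state: absent
    id: "{{ managed.volume.id }}"
```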
Zuul
3ac95541da Merge "Shows missing data in stack_info module output" 2025-05-13 19:23:35 +00:00
Zuul
59b5b33557 Merge "Let clouds_yaml_path behave as documented (Override path to clouds.yaml file)" 2025-05-13 19:23:34 +00:00
Zuul
3d7948a4e4 Merge "Don't compare current state for reboot_* actions" 2025-05-13 18:15:17 +00:00
Paulo Dias
6cb5ed4b84 feat: add support for filters in inventory
Change-Id: Id8428ce1b590b7b2c409623f180f8f8e608e1cda
Signed-off-by: Paulo Dias <paulodias.gm@gmail.com>
2025-05-13 17:21:47 +00:00
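A hedged sketch of the new inventory option; the exact `filters` syntax is an assumption, not confirmed by the commit:

```yaml
# openstack.yml inventory config; filters syntax is an assumption.
plugin: openstack.cloud.openstack
filters:
  status: ACTIVE
```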
matthias-rabe
283c342b10 Let clouds_yaml_path behave as documented (Override path to clouds.yaml file)
Change-Id: I01b38467c4bea5884cbbd04e1e3a2e7c6eeebfeb
2025-04-25 09:55:36 +02:00
Roberto Alfieri
2a5fb584e2 Shows missing data in stack_info module output
The generator `stacks` object didn't return all the values (e.g. output
and parameters) so `get_stack` is used to populate the dict.

Closes-Bug: #2059771
Change-Id: Ie9061e35fc4bf217d76eee96f07e0ed68e44927c
Signed-off-by: Roberto Alfieri <ralfieri@redhat.com>
2025-04-24 22:45:15 +00:00
Zuul
762fee2bad Merge "Allow role_assignment module to work cross domain" 2025-04-24 15:30:33 +00:00
Zuul
9d92897522 Merge "Fix example in the dns_zone_info module doc" 2025-04-24 13:35:53 +00:00
Zuul
df0b57a3c6 Merge "Fix router module external IPs when only subnet specified" 2025-04-24 10:03:02 +00:00
Gavin Chappell
5a6f5084dd Don't compare current state for reboot_* actions
When performing these actions a server will start `ACTIVE` and end
`ACTIVE` meaning that the Ansible module skips over the host you are
trying to reboot because it looks like the action was already taken.

Tests are updated to reflect `changed=True` with the end state
remaining as `ACTIVE`

Closes-Bug: 2046429

Change-Id: I8828f05bb5402fd2ba2c26b67c727abfbcc43202
2025-04-23 18:18:48 +01:00
Roberto Alfieri
c988b3bcbf Fix example in the dns_zone_info module doc
The example in `dns_zone_info` documentation showed an incorrect
`openstack.cloud.dns_zones` module instead of
`openstack.cloud.dns_zone_info`.

Closes-Bug: #2065471
Change-Id: Ic9484034f6631306c031ed640979bec009672ade
Signed-off-by: Roberto Alfieri <me@rebtoor.com>
2025-04-23 16:14:59 +00:00
Doug Goldstein
437438e33c Allow role_assignment module to work cross domain
The role_assignment module always looks up the user, group and project,
so to support cross-domain assignments we add extra parameters, as OSC
does, to look them up from the correct domains. Also switch to using
the service proxy interface to grant or revoke the roles.

Partial-Bug: #2052448
Partial-Bug: #2047151
Partial-Bug: #2097203
Change-Id: Id023cb9e7017c749bc39bba2091921154a413723
2025-04-23 09:21:43 -05:00
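A sketch of a cross-domain assignment; the `*_domain` parameter names are assumptions mirroring OSC's --user-domain/--project-domain options:

```yaml
# user_domain and project_domain are assumed parameter names.
- name: Grant a role to a user from another domain
  openstack.cloud.role_assignment:
    cloud: devstack
    state: present
    role: member
    user: alice
    user_domain: people
    project: demo
    project_domain: customers
```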
Mohammed Naser
3fd79d342c Fix disable_gateway_ip for subnet
When creating a subnet with disable_gateway_ip, the value
was not passed into the creation call, so the subnet was
created with a gateway, which was then removed on subsequent
updates.

Change-Id: I816d4a4d09b2116c00cf868a590bd92dac4bfc5b
2025-04-22 18:25:52 +00:00
Andrew Bonney
98bb212ae4 Fix router module external IPs when only subnet specified
The router module used to support adding external fixed IPs
by specifying the subnet only, and the documentation still
allows for this. Unfortunately the logic currently results
in an error message stating that the ip_address cannot be
null.

This patch reinstates this functionality and adds tests to
avoid a future regression.

Change-Id: Ie29c7b2b763a58ea107cc50507e99f650ee9e53f
2025-04-22 16:28:02 +00:00
Doug Goldstein
08d8cd8c25 fix test failures on recordset
We have long known that it is necessary to explicitly invoke
`to_dict` on openstacksdk resources before passing them to Ansible.
Here it was missed; in addition, records are only returned
once the recordset becomes active.

Change-Id: I49238d2f7add9412bb9100b69f1b84b512f8c34b
2025-04-22 16:30:58 +03:00
Doug Goldstein
41cf92df99 replace storyboard links with bugs.launchpad.net
The storyboard link at the top tells users that issues and
bugs are now tracked at bugs.launchpad.net, so update all the
links to point there.

Change-Id: I1dadae24ef4ca6ee2d244cc2a114cca5e4ea5a6b
2025-04-02 09:44:59 -05:00
Callum Dickinson
5494d153b1 Add the object_containers_info module
This adds a module for getting information on one or more
object storage containers from OpenStack.

The following options are supported:

* name - Get details for a single container by name.
  When this parameter is defined a single container is returned,
  with extra metadata available (with the same keys and value
  as in the existing object_container module).
* prefix - Search for and return a list of containers by prefix.
  When searching for containers, only a subset of metadata values
  are available.

When no options are specified, all containers in the project
are returned, with the same metadata available as when the
prefix option is used.

Change-Id: I8ba434a86050f72d8ce85c9e98731f6ef552fc79
2025-01-24 10:43:18 +13:00
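The options described above map to tasks like the following sketch:

```yaml
- name: Get details for a single container by name
  openstack.cloud.object_containers_info:
    cloud: devstack
    name: ansible_container
  register: container

- name: List containers matching a prefix
  openstack.cloud.object_containers_info:
    cloud: devstack
    prefix: ansible_
  register: containers
```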
Zuul
fef560eb5b Merge "CI: add retries for trait test" 2025-01-20 19:41:02 +00:00
Sagi Shnaidman
239c45c78f CI: add retries for trait test
Change-Id: Ic94f521950da75128a6a677111d4f0206a0e33d6
2025-01-18 13:03:43 +02:00
80 changed files with 4901 additions and 311 deletions

View File

@@ -48,6 +48,7 @@
designate: true
neutron-dns: true
neutron-trunk: true
neutron-segments: true
zuul_copy_output:
'{{ devstack_log_dir }}/test_output.log': 'logs'
extensions_to_txt:
@@ -95,6 +96,39 @@
c-bak: false
tox_extra_args: -vv --skip-missing-interpreters=false -- coe_cluster coe_cluster_template
- job:
name: ansible-collections-openstack-functional-devstack-manila-base
parent: ansible-collections-openstack-functional-devstack-base
# Do not restrict branches in base jobs because else Zuul would not find a matching
# parent job variant during job freeze when child jobs are on other branches.
description: |
Run openstack collections functional tests against a devstack with Manila plugin enabled
# Do not set job.override-checkout or job.required-projects.override-checkout in base job because
# else Zuul will use this branch when matching variants for parent jobs during job freeze
required-projects:
- openstack/manila
- openstack/python-manilaclient
files:
- ^ci/roles/share_type/.*$
- ^plugins/modules/share_type.py
- ^plugins/modules/share_type_info.py
timeout: 10800
vars:
devstack_localrc:
MANILA_ENABLED_BACKENDS: generic
MANILA_OPTGROUP_generic_driver_handles_share_servers: true
MANILA_OPTGROUP_generic_connect_share_server_to_tenant_network: true
MANILA_USE_SERVICE_INSTANCE_PASSWORD: true
devstack_plugins:
manila: https://opendev.org/openstack/manila
devstack_services:
manila: true
m-api: true
m-sch: true
m-shr: true
m-dat: true
tox_extra_args: -vv --skip-missing-interpreters=false -- share_type share_type_info
- job:
name: ansible-collections-openstack-functional-devstack-magnum
parent: ansible-collections-openstack-functional-devstack-magnum-base
@@ -104,6 +138,15 @@
with Magnum plugin enabled, using master of openstacksdk and latest
ansible release. Run it only on coe_cluster{,_template} changes.
- job:
name: ansible-collections-openstack-functional-devstack-manila
parent: ansible-collections-openstack-functional-devstack-manila-base
branches: master
description: |
Run openstack collections functional tests against a master devstack
with Manila plugin enabled, using master of openstacksdk and latest
ansible release. Run it only on share_type{,_info} changes.
- job:
name: ansible-collections-openstack-functional-devstack-octavia-base
parent: ansible-collections-openstack-functional-devstack-base
@@ -169,17 +212,42 @@
branches: master
description: |
Run openstack collections functional tests against a master devstack
using master of openstacksdk and stable 2.16 branch of ansible
using master of openstacksdk and stable 2.18 branch of ansible
required-projects:
- name: github.com/ansible/ansible
override-checkout: stable-2.18
vars:
tox_envlist: ansible_2_18
- job:
name: ansible-collections-openstack-functional-devstack-ansible-2.19
parent: ansible-collections-openstack-functional-devstack-base
branches: master
description: |
Run openstack collections functional tests against a master devstack
using master of openstacksdk and stable 2.19 branch of ansible
required-projects:
- name: github.com/ansible/ansible
override-checkout: stable-2.19
vars:
tox_envlist: ansible_2_19
- job:
name: ansible-collections-openstack-functional-devstack-ansible-2.20
parent: ansible-collections-openstack-functional-devstack-base
branches: master
description: |
Run openstack collections functional tests against a master devstack
using master of openstacksdk and stable 2.20 branch of ansible
required-projects:
- name: github.com/ansible/ansible
override-checkout: stable-2.20
vars:
tox_envlist: ansible_2_20
- job:
name: ansible-collections-openstack-functional-devstack-ansible-devel
parent: ansible-collections-openstack-functional-devstack-base
nodeset: openstack-single-node-jammy
branches: master
description: |
Run openstack collections functional tests against a master devstack
@@ -208,13 +276,13 @@
- job:
name: openstack-tox-linters-ansible-devel
parent: openstack-tox-linters-ansible
nodeset: ubuntu-jammy
nodeset: ubuntu-noble
description: |
Run openstack collections linter tests using the devel branch of ansible
# non-voting because we can't prevent ansible devel from breaking us
voting: false
vars:
python_version: '3.10'
python_version: '3.12'
bindep_profile: test py310
- job:
@@ -230,10 +298,36 @@
python_version: "3.12"
bindep_profile: test py312
- job:
name: openstack-tox-linters-ansible-2.19
parent: openstack-tox-linters-ansible
description: |
Run openstack collections linter tests using the 2.19 branch of ansible
required-projects:
- name: github.com/ansible/ansible
override-checkout: stable-2.19
vars:
tox_envlist: linters_2_19
python_version: "3.12"
bindep_profile: test py312
- job:
name: openstack-tox-linters-ansible-2.20
parent: openstack-tox-linters-ansible
description: |
Run openstack collections linter tests using the 2.20 branch of ansible
required-projects:
- name: github.com/ansible/ansible
override-checkout: stable-2.20
vars:
tox_envlist: linters_2_20
python_version: "3.12"
bindep_profile: test py312
# Cross-checks with other projects
- job:
name: bifrost-collections-src
parent: bifrost-integration-tinyipa-ubuntu-jammy
parent: bifrost-integration-on-ubuntu-noble
required-projects:
- openstack/ansible-collections-openstack
- # always use master branch when collecting parent job variants, refer to git blame for rationale.
@@ -244,7 +338,7 @@
override-checkout: master
- job:
name: bifrost-keystone-collections-src
parent: bifrost-integration-tinyipa-keystone-ubuntu-jammy
parent: bifrost-integration-keystone-on-ubuntu-noble
required-projects:
- openstack/ansible-collections-openstack
- # always use master branch when collecting parent job variants, refer to git blame for rationale.
@@ -266,16 +360,16 @@
data:
url: https://galaxy.ansible.com
token: !encrypted/pkcs1-oaep
- QJ3c5LfmM4YmqwwLKv4wK5lroWDLGeMyPkmHXhvf0ry3vGjKZvZxVpbIhFXJHXevHov/r
nvlqwmG8D5msynQKZDFg2ZwSMIQWRKfSbsSLe7A6NWI2wC+QtZSPiRiBcBcHY1QbNNW21
84cssYa1oHOA0WXpomBz1qXuPV48aKLjMnWysgFhNSx3Oog+ZOSCczyyVVuXP1lIWIO26
AtRTrEcr37K3JY9usE2PCbZKFOq/+IDPz9fbS7PtBOv7iXOHOf3AfBiJiaJe3q/ecoaaq
ejk2WTKWfvq/3rY4pU1976kUcxgcd+jj9ReFyw8edCsc1ecL0qmZFbdHmC03jEcVo4p8I
WJQ0D5wk4/u2Fu9texNuBvb62Yu3Y028Zhm5rz8Zl/ISsdaA3losn5S7C7iAH/yKlGQEI
N/1X4M0tVPaMtsIhZyyz+JMbeNyVR9ZarqbtpzRtVhjxL7KOiAQbEzAmZcBbCJ2Z5iI+P
bTp03f9Y/tZNtkohARvx1TKhv8CvsmyGkMm+r5Y8aWz3SNy8LL6bSwtGun/ifbnadHmw/
TD5/UUXHHjBGkeAu9HTtwUZ5Qdkfg92PnPgruAAuOkF1Y4RyRS9qvwhtqyHO8TwU0INRY
5MHEzeOQWemoQb/qdENp+J/Q9oMEbpFYv9TkrWkxVoKop6Str8e3FF5sxmN/SE=
- K93hOZo1B5z248H04COB1N2HCkGbFPo2EUr+0W7qFzsrdvmbsAI86Hl9bUCfEENGrwvfV
0j9CE5iO0tyqal3r6ucMhGT44MgQWL3MBeRvK89yAJpSNMU7R7rEY/zbjZMoC9YElcHEv
GEDZSA/0gQHCHpZVDlx4JMGwrnd+Nz9ha3c12BYeZS8rS/dQl7EmZ867OsozmNdG9UkkC
0vP/dkenUQNvoZOSWgZztRBlbAyI1nc5iEEw9vvpLh19HcY9+S2iAZkgSq4jOOO4wn7gE
XAZPr0HRdwS2m4Hw0Pusrg7SdC3+2O0N/fvFGnvvKXHcSgQk3rPLn6HfKzOJoPWc4WlDX
MA79jYloNBXjOaeXOoiwYzzshWK53F6Ci+3leq1cYuFyHSi2ds2mYXat7YndZSsmsk5um
hj0+Ddy9Om1uYy3nhHyZLULE7UDUmduA9EPkvdyWlcW0yZL2kXcrDTHlSp4PaJg9iKVys
0aOOo9CNMwhyXAOGiFCYF/m7Efbnp50zUQhHN9+7LeVzXZuiH98C8kNvWfE0qrkrrgQ1n
78UMqGcGpdw4ZSlWrDTbrbd4v0bRnsJ+IAWISnT5OXaeJgGZwXRuBHtTXqbjoosBeX/8w
YKb0lx7E5ZtSw7+Y6LNDGihGTmVg1nkZUWo85CxyF/RiWHuNvpkzzqXmdGS1bg=
- project:
check:
@@ -283,11 +377,16 @@
- tox-pep8
- openstack-tox-linters-ansible-devel
- openstack-tox-linters-ansible-2.18
- openstack-tox-linters-ansible-2.19
- openstack-tox-linters-ansible-2.20
- ansible-collections-openstack-functional-devstack
- ansible-collections-openstack-functional-devstack-releases
- ansible-collections-openstack-functional-devstack-ansible-2.18
- ansible-collections-openstack-functional-devstack-ansible-2.19
- ansible-collections-openstack-functional-devstack-ansible-2.20
- ansible-collections-openstack-functional-devstack-ansible-devel
- ansible-collections-openstack-functional-devstack-magnum
- ansible-collections-openstack-functional-devstack-manila
- ansible-collections-openstack-functional-devstack-octavia
- bifrost-collections-src:
@@ -301,21 +400,29 @@
jobs:
- tox-pep8
- openstack-tox-linters-ansible-2.18
- openstack-tox-linters-ansible-2.19
- openstack-tox-linters-ansible-2.20
- ansible-collections-openstack-functional-devstack-releases
- ansible-collections-openstack-functional-devstack-magnum
- ansible-collections-openstack-functional-devstack-manila
- ansible-collections-openstack-functional-devstack-octavia
periodic:
jobs:
- openstack-tox-linters-ansible-devel
- openstack-tox-linters-ansible-2.18
- openstack-tox-linters-ansible-2.19
- openstack-tox-linters-ansible-2.20
- ansible-collections-openstack-functional-devstack
- ansible-collections-openstack-functional-devstack-releases
- ansible-collections-openstack-functional-devstack-ansible-2.18
- ansible-collections-openstack-functional-devstack-ansible-2.19
- ansible-collections-openstack-functional-devstack-ansible-2.20
- ansible-collections-openstack-functional-devstack-ansible-devel
- bifrost-collections-src
- bifrost-keystone-collections-src
- ansible-collections-openstack-functional-devstack-magnum
- ansible-collections-openstack-functional-devstack-manila
- ansible-collections-openstack-functional-devstack-octavia
tag:

View File

@@ -4,6 +4,34 @@ Ansible OpenStack Collection Release Notes
.. contents:: Topics
v2.5.0
======
Release Summary
---------------
Bugfixes and minor changes
Major Changes
-------------
- Add import_method to module
- Add object_containers_info module
- Add support for filters in inventory
- Add volume_manage module
- Introduce share_type modules
Minor Changes
-------------
- Allow role_assignment module to work cross domain
- Don't compare current state for `reboot_*` actions
- Fix disable_gateway_ip for subnet
- Fix example in the dns_zone_info module doc
- Fix router module external IPs when only subnet specified
- Fix the bug reporting url
- Let clouds_yaml_path behave as documented (Override path to clouds.yaml file)
- Shows missing data in `stack_info` module output
v2.4.1
======

View File

@@ -211,7 +211,7 @@ Thank you for your interest in our Ansible OpenStack collection ☺️
There are many ways in which you can participate in the project, for example:
- [Report and verify bugs and help with solving issues](
https://storyboard.openstack.org/#!/project/openstack/ansible-collections-openstack).
https://bugs.launchpad.net/ansible-collections-openstack).
- [Submit and review patches](
https://review.opendev.org/#/q/project:openstack/ansible-collections-openstack).
- Follow OpenStack's [How To Contribute](https://wiki.openstack.org/wiki/How_To_Contribute) guide.

View File

@@ -616,3 +616,22 @@ releases:
- Update tags when changing server
release_summary: Bugfixes and minor changes
release_date: '2024-01-20'
2.5.0:
changes:
major_changes:
- Add import_method to module
- Add object_containers_info module
- Add support for filters in inventory
- Add volume_manage module
- Introduce share_type modules
minor_changes:
- Allow role_assignment module to work cross domain
- Don't compare current state for `reboot_*` actions
- Fix disable_gateway_ip for subnet
- Fix example in the dns_zone_info module doc
- Fix router module external IPs when only subnet specified
- Fix the bug reporting url
- Let clouds_yaml_path behave as documented (Override path to clouds.yaml file)
- Shows missing data in `stack_info` module output
release_summary: Bugfixes and minor changes
release_date: '2025-10-24'

View File

@@ -0,0 +1,3 @@
---
minor_changes:
- Add support for setting the shard key on a baremetal node.

View File

@@ -0,0 +1,5 @@
---
minor_changes:
- Added the new ``openstack.cloud.baremetal_port_group`` module to manage
Bare Metal port groups (create, update, and delete), including CI role
coverage and unit tests.

View File

@@ -46,6 +46,7 @@ expected_fields:
- reservation
- resource_class
- retired_reason
- shard
- states
- storage_interface
- target_power_state

View File

@@ -46,6 +46,7 @@ expected_fields:
- reservation
- resource_class
- retired_reason
- shard
- states
- storage_interface
- target_power_state

View File

@@ -0,0 +1,12 @@
expected_fields:
- address
- created_at
- extra
- id
- links
- mode
- name
- node_id
- properties
- standalone_ports_supported
- updated_at

View File

@@ -0,0 +1,100 @@
---
# TODO: Actually run this role in CI. Atm we do not have DevStack's ironic plugin enabled.
- name: Create baremetal node
openstack.cloud.baremetal_node:
cloud: "{{ cloud }}"
driver_info:
ipmi_address: "1.2.3.4"
ipmi_username: "admin"
ipmi_password: "secret"
name: ansible_baremetal_node
nics:
- mac: "aa:bb:cc:aa:bb:cc"
state: present
register: node
- name: Create baremetal port group
openstack.cloud.baremetal_port_group:
cloud: "{{ cloud }}"
state: present
name: ansible_baremetal_port_group
node: ansible_baremetal_node
address: fa:16:3e:aa:aa:ab
mode: active-backup
standalone_ports_supported: true
extra:
test: created
properties:
miimon: '100'
register: port_group
- debug: var=port_group
- name: Assert return values of baremetal_port_group module
assert:
that:
# allow new fields to be introduced but prevent fields from being removed
- expected_fields|difference(port_group.port_group.keys())|length == 0
- port_group.port_group.name == "ansible_baremetal_port_group"
- port_group.port_group.node_id == node.node.id
- name: Update baremetal port group
openstack.cloud.baremetal_port_group:
cloud: "{{ cloud }}"
state: present
id: "{{ port_group.port_group.id }}"
mode: 802.3ad
standalone_ports_supported: false
extra:
test: updated
register: updated_port_group
- name: Assert return values of updated baremetal port group
assert:
that:
- updated_port_group is changed
- updated_port_group.port_group.id == port_group.port_group.id
- updated_port_group.port_group.mode == "802.3ad"
- not updated_port_group.port_group.standalone_ports_supported
- updated_port_group.port_group.extra.test == "updated"
- name: Update baremetal port group again
openstack.cloud.baremetal_port_group:
cloud: "{{ cloud }}"
state: present
id: "{{ port_group.port_group.id }}"
mode: 802.3ad
standalone_ports_supported: false
extra:
test: updated
register: updated_port_group
- name: Assert idempotency for baremetal port group module
assert:
that:
- updated_port_group is not changed
- updated_port_group.port_group.id == port_group.port_group.id
- name: Delete baremetal port group
openstack.cloud.baremetal_port_group:
cloud: "{{ cloud }}"
state: absent
id: "{{ port_group.port_group.id }}"
- name: Delete baremetal port group again
openstack.cloud.baremetal_port_group:
cloud: "{{ cloud }}"
state: absent
id: "{{ port_group.port_group.id }}"
register: deleted_port_group
- name: Assert idempotency for deleted baremetal port group
assert:
that:
- deleted_port_group is not changed
- name: Delete baremetal node
openstack.cloud.baremetal_node:
cloud: "{{ cloud }}"
name: ansible_baremetal_node
state: absent

View File

@@ -72,6 +72,8 @@
image_id: '{{ image_id }}'
is_floating_ip_enabled: true
keypair_id: '{{ keypair.keypair.id }}'
flavor_id: 'm1.small'
master_flavor_id: 'm1.small'
name: k8s
state: present
register: coe_cluster_template

View File

@@ -27,6 +27,12 @@
name: ansible_external
external: true
- name: Gather information about external network
openstack.cloud.networks_info:
cloud: "{{ cloud }}"
name: ansible_external
register: external_network
- name: Create external subnet
openstack.cloud.subnet:
cloud: "{{ cloud }}"
@@ -98,6 +104,17 @@
- ip_address: 10.7.7.102
register: port3
- name: Create internal port 4
openstack.cloud.port:
cloud: "{{ cloud }}"
state: present
name: ansible_internal_port4
network: ansible_internal
fixed_ips:
- ip_address: 10.7.7.103
- ip_address: 10.7.7.104
register: port4
- name: Create router 1
openstack.cloud.router:
cloud: "{{ cloud }}"
@@ -136,10 +153,31 @@
selectattr('floating_network_id', '==', public_network.networks.0.id)|
list|length > 0 }}"
# TODO: Replace with appropriate Ansible module once available
- name: Create a floating ip on public network (required for simplest, first floating ip test)
command: openstack --os-cloud={{ cloud }} floating ip create public
when: not public_network_had_fips
block:
- name: Create a floating ip on public network
openstack.cloud.floating_ip:
cloud: "{{ cloud }}"
state: present
network: public
register: public_fip_result
- name: Verify floating ip got created
assert:
that:
- public_fip_result.floating_ip.floating_network_id == public_network.networks.0.id
- name: Create a floating ip on public network again
openstack.cloud.floating_ip:
cloud: "{{ cloud }}"
state: present
network: public
register: public_fip_result
- name: Verify idempotency
assert:
that: public_fip_result is not changed
# TODO: Replace with appropriate Ansible module once available
- name: Create floating ip 1 on external network
@@ -151,6 +189,90 @@
when: fips.floating_ips|length == 0 or
"10.6.6.150" not in fips.floating_ips|map(attribute="floating_ip_address")|list
- name: Create floating ip 2 on external network
openstack.cloud.floating_ip:
cloud: "{{ cloud }}"
state: present
network: ansible_external
floating_ip_address: 10.6.6.151
register: external_fip2_result
- name: Verify floating ip got created
assert:
that:
- external_fip2_result.floating_ip.floating_network_id == external_network.networks.0.id
- name: Update floating ip 2 on external network
openstack.cloud.floating_ip:
cloud: "{{ cloud }}"
state: present
network: ansible_external
floating_ip_address: 10.6.6.151
nat_destination: ansible_internal
fixed_address: 10.7.7.104
register: external_fip2_result
- name: Verify floating ip got updated
assert:
that:
- external_fip2_result is changed
- external_fip2_result.floating_ip.floating_ip_address == "10.6.6.151"
- external_fip2_result.floating_ip.port_id == port4.port.id
- external_fip2_result.floating_ip.fixed_ip_address == "10.7.7.104"
- name: Update floating ip 2 on external network again
openstack.cloud.floating_ip:
cloud: "{{ cloud }}"
state: present
network: ansible_external
floating_ip_address: 10.6.6.151
nat_destination: ansible_internal
fixed_address: 10.7.7.104
register: external_fip2_result
- name: Verify idempotency
assert:
that:
- external_fip2_result is not changed
- name: Detach floating ip 2 on external network from port
openstack.cloud.floating_ip:
cloud: "{{ cloud }}"
state: absent
network: ansible_external
floating_ip_address: 10.6.6.151
- name: Get floating ip 2 info
openstack.cloud.floating_ip_info:
cloud: "{{ cloud }}"
floating_ip_address: 10.6.6.151
register: external_fip2_result
- name: Verify floating ip got detached
assert:
that:
- external_fip2_result.floating_ips.0.floating_ip_address == "10.6.6.151"
- external_fip2_result.floating_ips.0.fixed_ip_address == none
- external_fip2_result.floating_ips.0.port_id == none
- name: Detach floating ip 2 on external network from port again
openstack.cloud.floating_ip:
cloud: "{{ cloud }}"
state: absent
network: ansible_external
floating_ip_address: 10.6.6.151
- name: Get floating ip 2 info
openstack.cloud.floating_ip_info:
cloud: "{{ cloud }}"
floating_ip_address: 10.6.6.151
register: external_fip2_result
- name: Verify idempotency
assert:
that:
- external_fip2_result is not changed
- name: Create server 1 with one nic
openstack.cloud.server:
cloud: "{{ cloud }}"
@@ -241,7 +363,7 @@
that:
- server1_fips is success
- server1_fips is not changed
- server1_fips.floating_ips
- server1_fips.floating_ips|length > 0
# allow new fields to be introduced but prevent fields from being removed
- expected_fields|difference(server1_fips.floating_ips[0].keys())|length == 0
@@ -260,7 +382,7 @@
- name: Assert return values of floating_ip module
assert:
that:
- floating_ip.floating_ip
- floating_ip.floating_ip|length > 0
# allow new fields to be introduced but prevent fields from being removed
- expected_fields|difference(floating_ip.floating_ip.keys())|length == 0
@@ -312,7 +434,7 @@
- name: Assert floating ip attached to server 2
assert:
that:
- server2_fip.floating_ip
- server2_fip.floating_ip|length > 0
- name: Find all floating ips for debugging
openstack.cloud.floating_ip_info:
@@ -433,18 +555,41 @@
cloud: "{{ cloud }}"
register: fips
# TODO: Replace with appropriate Ansible module once available
- name: Delete floating ip on public network if we created it
when: not public_network_had_fips
command: >
openstack --os-cloud={{ cloud }} floating ip delete
{{ fips.floating_ips|selectattr('floating_network_id', '==', public_network.networks.0.id)|
map(attribute="floating_ip_address")|list|join(' ') }}
openstack.cloud.floating_ip:
cloud: "{{ cloud }}"
state: absent
purge: true
floating_ip_address: "{{ public_fip }}"
network: public
loop: >-
{{
fips.floating_ips |
selectattr('floating_network_id', '==', public_network.networks.0.id) |
map(attribute="floating_ip_address") |
list
}}
loop_control:
loop_var: public_fip
# TODO: Replace with appropriate Ansible module once available
- name: Delete floating ip 1
command: openstack --os-cloud={{ cloud }} floating ip delete 10.6.6.150
when: fips.floating_ips|length > 0 and "10.6.6.150" in fips.floating_ips|map(attribute="floating_ip_address")|list
openstack.cloud.floating_ip:
cloud: "{{ cloud }}"
state: absent
purge: true
floating_ip_address: 10.6.6.150
network: ansible_external
- name: Delete floating ip 2
when: fips.floating_ips|length > 0 and "10.6.6.151" in fips.floating_ips|map(attribute="floating_ip_address")|list
openstack.cloud.floating_ip:
cloud: "{{ cloud }}"
state: absent
purge: true
floating_ip_address: 10.6.6.151
network: ansible_external
- name: Get remaining floating ips on external network
openstack.cloud.floating_ip_info:
@@ -452,14 +597,24 @@
floating_network: ansible_external
register: fips
# TODO: Replace with appropriate Ansible module once available
# The first, simple floating ip test might have allocated a floating ip on the external network.
# This floating ip must be removed before external network can be deleted.
- name: Delete remaining floating ips on external network
when: fips.floating_ips|length > 0
command: >
openstack --os-cloud={{ cloud }} floating ip delete
{{ fips.floating_ips|map(attribute="floating_ip_address")|list|join(' ') }}
openstack.cloud.floating_ip:
cloud: "{{ cloud }}"
state: absent
purge: true
floating_ip_address: "{{ external_fip }}"
network: ansible_external
loop: >-
{{
fips.floating_ips |
map(attribute="floating_ip_address") |
list
}}
loop_control:
loop_var: external_fip
# Remove routers after floating ips have been detached and disassociated else removal fails with
# Error detaching interface from router ***: Client Error for url: ***,
@@ -478,6 +633,12 @@
state: absent
name: ansible_router1
- name: Delete internal port 4
openstack.cloud.port:
cloud: "{{ cloud }}"
state: absent
name: ansible_internal_port4
- name: Delete internal port 3
openstack.cloud.port:
cloud: "{{ cloud }}"
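
The recurring `expected_fields|difference(result.keys())|length == 0` assertions above tolerate newly introduced fields while still failing if a documented field disappears. A minimal Python sketch of that set logic (field names here are illustrative, not the role's actual defaults):

```python
# Illustrative field names; the real lists live in the role's defaults files.
expected_fields = {"description", "id", "name", "floating_ip_address"}

# A result that kept every documented field and gained a new one:
returned = {"description": "", "id": "uuid", "name": "fip1",
            "floating_ip_address": "10.6.6.151", "new_field": "ok"}

# Jinja2's difference filter maps to set subtraction:
missing = expected_fields - returned.keys()
print(len(missing) == 0)  # extra fields are fine; a removed field would fail
```

The check is deliberately one-directional: additions pass, removals fail, which matches the inline comment "allow new fields to be introduced but prevent fields from being removed".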

View File

@@ -279,6 +279,11 @@
ansible.builtin.set_fact:
cache: "{{ cache.content | b64decode | from_yaml }}"
- name: Further process Ansible 2.19+ cache
ansible.builtin.set_fact:
cache: "{{ cache.__payload__ | from_yaml }}"
when: cache.__payload__ is defined
- name: Check Ansible's cache
assert:
that:
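
A rough Python sketch of the two-stage decode performed by the cache tasks above, assuming the Ansible 2.19+ fact cache wraps its data in a `__payload__` key (JSON is used here as a YAML subset for the sketch):

```python
import base64
import json

# Simulated fetched cache file content (base64, as the slurp/uri step returns it).
raw = base64.b64encode(b'{"__payload__": "{\\"key\\": \\"value\\"}"}')

cache = json.loads(base64.b64decode(raw))     # mirrors: b64decode | from_yaml
if "__payload__" in cache:                    # Ansible 2.19+ wraps the data
    cache = json.loads(cache["__payload__"])  # second from_yaml on the payload
print(cache)
```

On older Ansible versions the `__payload__` key is absent, so the conditional second decode is a no-op and `cache` is already the final mapping.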

View File

@@ -38,7 +38,7 @@
- name: Ensure public key is returned
assert:
that:
- keypair.keypair.public_key is defined and keypair.keypair.public_key
- keypair.keypair.public_key is defined and keypair.keypair.public_key|length > 0
- name: Create another keypair
openstack.cloud.keypair:

View File

@@ -11,7 +11,7 @@
- name: Check output of creating network
assert:
that:
- infonet.network
- infonet.network is defined
- item in infonet.network
loop: "{{ expected_fields }}"

View File

@@ -0,0 +1,17 @@
---
expected_fields:
- description
- id
- name
- network_id
- network_type
- physical_network
- segmentation_id
network_name: segment_network
segment_name: example_segment
network_type: vlan
segmentation_id: 999
physical_network: public
initial_description: "example segment description"
updated_description: "updated segment description"

View File

@@ -0,0 +1,72 @@
---
- name: Create network {{ network_name }}
openstack.cloud.network:
cloud: "{{ cloud }}"
name: "{{ network_name }}"
state: present
- name: Create segment {{ segment_name }}
openstack.cloud.network_segment:
cloud: "{{ cloud }}"
name: "{{ segment_name }}"
description: "{{ initial_description }}"
network: "{{ network_name }}"
network_type: "{{ network_type }}"
segmentation_id: "{{ segmentation_id }}"
physical_network: "{{ physical_network }}"
state: present
register: segment
- name: Assert changed
assert:
that: segment is changed
- name: Assert segment fields
assert:
that: item in segment.network_segment
loop: "{{ expected_fields }}"
- name: Update segment {{ segment_name }} by name - no changes
openstack.cloud.network_segment:
cloud: "{{ cloud }}"
name: "{{ segment_name }}"
description: "{{ initial_description }}"
state: present
register: segment
- name: Assert not changed
assert:
that: segment is not changed
- name: Update segment {{ segment_name }} by all fields - changes
openstack.cloud.network_segment:
cloud: "{{ cloud }}"
name: "{{ segment_name }}"
description: "{{ updated_description }}"
network: "{{ network_name }}"
network_type: "{{ network_type }}"
segmentation_id: "{{ segmentation_id }}"
physical_network: "{{ physical_network }}"
state: present
register: segment
- name: Assert changed
assert:
that: segment is changed
- name: Delete segment {{ segment_name }}
openstack.cloud.network_segment:
cloud: "{{ cloud }}"
name: "{{ segment_name }}"
state: absent
register: segment
- name: Assert changed
assert:
that: segment is changed
- name: Delete network {{ network_name }}
openstack.cloud.network:
cloud: "{{ cloud }}"
name: "{{ network_name }}"
state: absent

View File

@@ -0,0 +1,37 @@
---
test_container_unprefixed_name: ansible-test-container
test_container_prefixed_prefix: ansible-prefixed-test-container
test_container_prefixed_num: 2
test_object_data: "Hello, world!"
expected_fields_single:
- bytes
- bytes_used
- content_type
- count
- history_location
- id
- if_none_match
- is_content_type_detected
- is_newest
- meta_temp_url_key
- meta_temp_url_key_2
- name
- object_count
- read_ACL
- storage_policy
- sync_key
- sync_to
- timestamp
- versions_location
- write_ACL
expected_fields_multiple:
- bytes
- bytes_used
- count
- id
- name
- object_count

View File

@@ -0,0 +1,124 @@
---
- name: Generate list of containers to create
ansible.builtin.set_fact:
all_test_containers: >-
{{
[test_container_unprefixed_name]
+ (
[test_container_prefixed_prefix + '-']
| product(range(test_container_prefixed_num) | map('string'))
| map('join', '')
)
}}
- name: Run checks
block:
- name: Create all containers
openstack.cloud.object_container:
cloud: "{{ cloud }}"
name: "{{ item }}"
read_ACL: ".r:*,.rlistings"
loop: "{{ all_test_containers }}"
- name: Create an object in all containers
openstack.cloud.object:
cloud: "{{ cloud }}"
container: "{{ item }}"
name: hello.txt
data: "{{ test_object_data }}"
loop: "{{ all_test_containers }}"
- name: Fetch single containers by name
openstack.cloud.object_containers_info:
cloud: "{{ cloud }}"
name: "{{ item }}"
register: single_containers
loop: "{{ all_test_containers }}"
- name: Check that all fields are returned for single containers
ansible.builtin.assert:
that:
- (item.containers | length) == 1
- item.containers[0].name == item.item
- item.containers[0].bytes == (test_object_data | length)
- item.containers[0].read_ACL == ".r:*,.rlistings"
# allow new fields to be introduced but prevent fields from being removed
- (expected_fields_single | difference(item.containers[0].keys()) | length) == 0
quiet: true
loop: "{{ single_containers.results }}"
loop_control:
label: "{{ item.item }}"
- name: Fetch multiple containers by prefix
openstack.cloud.object_containers_info:
cloud: "{{ cloud }}"
prefix: "{{ test_container_prefixed_prefix }}"
register: multiple_containers
- name: Check that the correct number of prefixed containers were returned
ansible.builtin.assert:
that:
- (multiple_containers.containers | length) == test_container_prefixed_num
fail_msg: >-
Incorrect number of containers found
(found {{ multiple_containers.containers | length }},
expected {{ test_container_prefixed_num }})
quiet: true
- name: Check that all prefixed containers exist
ansible.builtin.assert:
that:
- >-
(test_container_prefixed_prefix + '-' + (item | string))
in (multiple_containers.containers | map(attribute='name'))
fail_msg: "Container not found: {{ test_container_prefixed_prefix + '-' + (item | string) }}"
quiet: true
loop: "{{ range(test_container_prefixed_num) | list }}"
loop_control:
label: "{{ test_container_prefixed_prefix + '-' + (item | string) }}"
- name: Check that the expected fields are returned for all prefixed containers
ansible.builtin.assert:
that:
- item.name.startswith(test_container_prefixed_prefix)
# allow new fields to be introduced but prevent fields from being removed
- (expected_fields_multiple | difference(item.keys()) | length) == 0
quiet: true
loop: "{{ multiple_containers.containers | sort(attribute='name') }}"
loop_control:
label: "{{ item.name }}"
- name: Fetch all containers
openstack.cloud.object_containers_info:
cloud: "{{ cloud }}"
register: all_containers
- name: Check that all expected containers were returned
ansible.builtin.assert:
that:
- item in (all_containers.containers | map(attribute='name'))
fail_msg: "Container not found: {{ item }}"
quiet: true
loop: "{{ all_test_containers }}"
- name: Check that the expected fields are returned for all containers
ansible.builtin.assert:
that:
# allow new fields to be introduced but prevent fields from being removed
- (expected_fields_multiple | difference(item.keys()) | length) == 0
quiet: true
loop: "{{ all_containers.containers | selectattr('name', 'in', all_test_containers) }}"
loop_control:
label: "{{ item.name }}"
always:
- name: Delete all containers
openstack.cloud.object_container:
cloud: "{{ cloud }}"
name: "{{ item }}"
state: absent
delete_with_all_objects: true
loop: "{{ all_test_containers }}"
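
The `product`/`join` expression in the "Generate list of containers to create" task above is compact but easy to misread; a plain-Python equivalent, assuming the defaults shown earlier in this file (`test_container_prefixed_num: 2`):

```python
from itertools import product

# Values mirror the role defaults; the names are what the set_fact builds.
unprefixed = "ansible-test-container"
prefix = "ansible-prefixed-test-container"
num = 2

# [prefix + '-'] | product(range(num) | map('string')) | map('join', '')
prefixed = ["".join(pair) for pair in product([prefix + "-"], map(str, range(num)))]
all_test_containers = [unprefixed] + prefixed
print(all_test_containers)
```

That is, one unprefixed container plus `num` containers named `<prefix>-0`, `<prefix>-1`, and so on.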

View File

@@ -0,0 +1,10 @@
expected_fields:
- description
- external_port
- floatingip_id
- id
- internal_ip_address
- internal_port
- internal_port_id
- name
- protocol

View File

@@ -0,0 +1,272 @@
---
- name: Create test network
openstack.cloud.network:
cloud: "{{ cloud }}"
state: present
name: test_internal_network
- name: Create test subnet
openstack.cloud.subnet:
cloud: "{{ cloud }}"
state: present
name: test_internal_subnet
network_name: test_internal_network
cidr: 192.168.100.0/24
gateway_ip: 192.168.100.1
- name: Create test port
openstack.cloud.port:
cloud: "{{ cloud }}"
state: present
name: test_internal_port
network: test_internal_network
fixed_ips:
- ip_address: 192.168.100.10
register: test_internal_port
- name: Create test external network
openstack.cloud.network:
cloud: "{{ cloud }}"
state: present
name: test_external_network
external: true
- name: Create test external subnet
openstack.cloud.subnet:
cloud: "{{ cloud }}"
state: present
network_name: test_external_network
name: test_external_subnet
cidr: 10.6.6.0/24
- name: Create router
openstack.cloud.router:
cloud: "{{ cloud }}"
state: present
name: test_router
network: test_external_network
external_fixed_ips:
- subnet: test_external_subnet
interfaces:
- test_internal_subnet
- name: Create test floating IP
openstack.cloud.floating_ip:
cloud: "{{ cloud }}"
state: present
network: test_external_network
register: test_floating_ip
- name: Test - Create port forwarding rule
openstack.cloud.port_forwarding:
cloud: "{{ cloud }}"
state: present
floating_ip: "{{ test_floating_ip.floating_ip.id }}"
network_port: test_internal_port
internal_ip: 192.168.100.10
external_protocol_port: 8080
internal_protocol_port: 80
protocol: tcp
register: pf_create
- name: Get port forwarding info
openstack.cloud.port_forwarding_info:
cloud: "{{ cloud }}"
floating_ip: "{{ test_floating_ip.floating_ip.id }}"
port_forwarding_id: "{{ pf_create.port_forwarding.id }}"
register: pf_create_info
- name: Verify - Port forwarding created successfully
assert:
that:
- pf_create is changed
- pf_create.port_forwarding is defined
- pf_create.port_forwarding.external_port == 8080
- pf_create.port_forwarding.internal_port == 80
- pf_create.port_forwarding.protocol == "tcp"
- pf_create_info.port_forwardings | length == 1
- pf_create_info.port_forwardings.0.id == pf_create.port_forwarding.id
- name: Test - Create port forwarding rule again (idempotency)
openstack.cloud.port_forwarding:
cloud: "{{ cloud }}"
state: present
floating_ip: "{{ test_floating_ip.floating_ip.id }}"
network_port: test_internal_port
internal_ip: 192.168.100.10
external_protocol_port: 8080
internal_protocol_port: 80
protocol: tcp
register: pf_idempotent
- name: Verify - No changes
assert:
that:
- pf_idempotent is not changed
- name: Test - Update port forwarding internal port
openstack.cloud.port_forwarding:
cloud: "{{ cloud }}"
state: present
floating_ip: "{{ test_floating_ip.floating_ip.id }}"
network_port: test_internal_port
internal_ip: 192.168.100.10
external_protocol_port: 8080
internal_protocol_port: 8080 # Changed from 80 to 8080
protocol: tcp
register: pf_update
- name: Get port forwarding info
openstack.cloud.port_forwarding_info:
cloud: "{{ cloud }}"
floating_ip: "{{ test_floating_ip.floating_ip.id }}"
port_forwarding_id: "{{ pf_update.port_forwarding.id }}"
register: pf_update_info
- name: Verify - Port forwarding updated successfully
assert:
that:
- pf_update is changed
- pf_update.port_forwarding.internal_port == 8080
- pf_update_info.port_forwardings | length == 1
- pf_update_info.port_forwardings.0.id == pf_update.port_forwarding.id
- name: Test - Update with same values (idempotency)
openstack.cloud.port_forwarding:
cloud: "{{ cloud }}"
state: present
floating_ip: "{{ test_floating_ip.floating_ip.id }}"
network_port: test_internal_port
internal_ip: 192.168.100.10
external_protocol_port: 8080
internal_protocol_port: 8080
protocol: tcp
register: pf_update_idempotent
- name: Verify - No changes
assert:
that:
- pf_update_idempotent is not changed
- name: Test - Change just one attribute
openstack.cloud.port_forwarding:
cloud: "{{ cloud }}"
state: present
port_forwarding_id: "{{ pf_create.port_forwarding.id }}"
floating_ip: "{{ test_floating_ip.floating_ip.id }}"
internal_protocol_port: 9090 # Different internal port
register: pf_update_by_id
- name: Verify - Port forwarding updated by ID
assert:
that:
- pf_update_by_id.changed == true
- pf_update_by_id.port_forwarding.id == pf_create.port_forwarding.id
- pf_update_by_id.port_forwarding.internal_port_id == test_internal_port.port.id
- pf_update_by_id.port_forwarding.internal_ip_address == "192.168.100.10"
- pf_update_by_id.port_forwarding.external_port == 8080
- pf_update_by_id.port_forwarding.internal_port == 9090
- name: Test - Create port forwarding without specifying internal IP
openstack.cloud.port_forwarding:
cloud: "{{ cloud }}"
state: present
floating_ip: "{{ test_floating_ip.floating_ip.id }}"
network_port: test_internal_port
external_protocol_port: 2222
internal_protocol_port: 22
protocol: tcp
register: pf_auto_internal_ip
- name: Verify - Port forwarding created with auto internal IP
assert:
that:
- pf_auto_internal_ip.changed == true
- pf_auto_internal_ip.port_forwarding.internal_ip_address == "192.168.100.10"
- name: Test - Delete port forwarding
openstack.cloud.port_forwarding:
cloud: "{{ cloud }}"
state: absent
floating_ip: "{{ test_floating_ip.floating_ip.id }}"
network_port: test_internal_port
external_protocol_port: 8080
internal_protocol_port: 9090
protocol: tcp
register: pf_delete
- name: Verify - Port forwarding deleted successfully
assert:
that:
- pf_delete.changed == true
- name: Test - Delete port forwarding by ID
openstack.cloud.port_forwarding:
cloud: "{{ cloud }}"
state: absent
floating_ip: "{{ test_floating_ip.floating_ip.id }}"
port_forwarding_id: "{{ pf_auto_internal_ip.port_forwarding.id }}"
register: pf_delete_by_id
- name: Verify - Port forwarding deleted by ID
assert:
that:
- pf_delete_by_id.changed == true
- name: Test - Delete already deleted port forwarding (idempotency)
openstack.cloud.port_forwarding:
cloud: "{{ cloud }}"
state: absent
port_forwarding_id: "{{ pf_auto_internal_ip.port_forwarding.id }}"
floating_ip: "{{ test_floating_ip.floating_ip.id }}"
register: pf_delete_idempotent
- name: Verify - No errors on deleting non-existent rule (idempotency)
assert:
that:
- pf_delete_idempotent is not changed
- pf_delete_idempotent is not failed
- name: Clean up - Delete test floating IP
openstack.cloud.floating_ip:
cloud: "{{ cloud }}"
state: absent
floating_ip_address: "{{ test_floating_ip.floating_ip.floating_ip_address }}"
network: test_external_network
purge: true
- name: Clean up - Delete router
openstack.cloud.router:
cloud: "{{ cloud }}"
state: absent
name: test_router
- name: Clean up - Delete test external subnet
openstack.cloud.subnet:
cloud: "{{ cloud }}"
state: absent
name: test_external_subnet
- name: Clean up - Delete test external network
openstack.cloud.network:
cloud: "{{ cloud }}"
state: absent
name: test_external_network
- name: Clean up - Delete test port
openstack.cloud.port:
cloud: "{{ cloud }}"
state: absent
name: test_internal_port
- name: Clean up - Delete test subnet
openstack.cloud.subnet:
cloud: "{{ cloud }}"
state: absent
name: test_internal_subnet
- name: Clean up - Delete test network
openstack.cloud.network:
cloud: "{{ cloud }}"
state: absent
name: test_internal_network
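
Conceptually, each rule created in the tasks above maps one port on the floating IP to one port on an internal fixed IP. A hypothetical sketch of the first rule (the floating IP address is assumed, since it is allocated dynamically):

```python
# Hypothetical representation of the first rule created above; the floating
# IP address is an assumption for illustration.
rule = {
    "floating_ip": "203.0.113.5",
    "external_port": 8080,
    "internal_ip_address": "192.168.100.10",
    "internal_port": 80,
    "protocol": "tcp",
}

# Traffic to <floating_ip>:8080/tcp is forwarded to 192.168.100.10:80.
mapping = (f"{rule['floating_ip']}:{rule['external_port']} -> "
           f"{rule['internal_ip_address']}:{rule['internal_port']}/{rule['protocol']}")
print(mapping)
```

This is also why the tests can change `internal_protocol_port` independently: the (floating IP, external port, protocol) triple identifies the rule, while the internal side is its target.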

View File

@@ -174,6 +174,38 @@
that:
- project.project.is_enabled == True
- name: Update project to add new extra_specs
openstack.cloud.project:
cloud: "{{ cloud }}"
state: present
name: ansible_project
extra_specs:
is_enabled: True
another_tag: True
register: project
- name: Assert return values of project module
assert:
that:
- project.project.is_enabled == True
- project.project.another_tag == True
- name: Update project to change existing extra_specs
openstack.cloud.project:
cloud: "{{ cloud }}"
state: present
name: ansible_project
extra_specs:
is_enabled: True
another_tag: False
register: project
- name: Assert return values of project module
assert:
that:
- project.project.is_enabled == True
- project.project.another_tag == False
- name: Delete project
openstack.cloud.project:
cloud: "{{ cloud }}"

View File

@@ -14,6 +14,15 @@
email: test@example.net
register: dns_zone
- name: Ensure recordset not present
openstack.cloud.recordset:
cloud: "{{ cloud }}"
zone: "{{ dns_zone.zone.name }}"
name: "{{ recordset_name }}"
recordset_type: "a"
records: "{{ records }}"
state: absent
- name: Create a recordset
openstack.cloud.recordset:
cloud: "{{ cloud }}"
@@ -22,11 +31,13 @@
recordset_type: "a"
records: "{{ records }}"
register: recordset
until: '"PENDING" not in recordset["recordset"].status'
retries: 10
delay: 5
- name: Verify recordset info
assert:
that:
- recordset is changed
- recordset["recordset"].name == recordset_name
- recordset["recordset"].zone_name == dns_zone.zone.name
- recordset["recordset"].records | list | sort == records | list | sort

View File

@@ -45,12 +45,6 @@
state: absent
user: admin
- name: Delete project
openstack.cloud.project:
cloud: "{{ cloud }}"
state: absent
name: ansible_project
- name: Create domain
openstack.cloud.identity_domain:
cloud: "{{ cloud }}"
@@ -78,6 +72,7 @@
state: present
name: ansible_user
domain: default
register: specific_user
- name: Create user in specific domain
openstack.cloud.identity_user:
@@ -138,6 +133,45 @@
that:
- role_assignment is changed
- name: Assign role to user in specific domain on default domain project
openstack.cloud.role_assignment:
cloud: "{{ cloud }}"
role: anotherrole
user: "{{ specific_user.user.id }}"
domain: default
project: ansible_project
register: role_assignment
- name: Assert role assignment
assert:
that:
- role_assignment is changed
- name: Revoke role from user in specific domain
openstack.cloud.role_assignment:
cloud: "{{ cloud }}"
role: anotherrole
user: "{{ specific_user.user.id }}"
domain: default
project: ansible_project
state: absent
register: role_assignment
- name: Assert role assignment revoked
assert:
that:
- role_assignment is changed
- name: Assign role to user in specific domain on default domain project using names
openstack.cloud.role_assignment:
cloud: "{{ cloud }}"
role: anotherrole
user: ansible_user
user_domain: "{{ specific_user.user.domain_id }}"
project: ansible_project
project_domain: default
register: role_assignment
- name: Delete group in default domain
openstack.cloud.identity_group:
cloud: "{{ cloud }}"
@@ -171,3 +205,10 @@
cloud: "{{ cloud }}"
state: absent
name: ansible_domain
- name: Delete project
openstack.cloud.project:
cloud: "{{ cloud }}"
state: absent
name: ansible_project

View File

@@ -558,6 +558,46 @@
assert:
that: router is not changed
- name: Create router without explicit IP address
openstack.cloud.router:
cloud: "{{ cloud }}"
state: present
name: "{{ router_name }}"
enable_snat: false
interfaces:
- shade_subnet1
network: "{{ external_network_name }}"
external_fixed_ips:
- subnet_id: shade_subnet5
register: router
- name: Assert changed
assert:
that: router is changed
- name: Update router without explicit IP address
openstack.cloud.router:
cloud: "{{ cloud }}"
state: present
name: "{{ router_name }}"
enable_snat: false
interfaces:
- shade_subnet1
network: "{{ external_network_name }}"
external_fixed_ips:
- subnet_id: shade_subnet5
register: router
- name: Assert idempotent module
assert:
that: router is not changed
- name: Delete router
openstack.cloud.router:
cloud: "{{ cloud }}"
state: absent
name: "{{ router_name }}"
- name: Create router with simple interface
openstack.cloud.router:
cloud: "{{ cloud }}"

View File

@@ -553,7 +553,7 @@
assert:
that:
- servers.servers.0.status == 'ACTIVE'
- server is not changed
- server is changed
- name: Reboot server (HARD)
openstack.cloud.server_action:
@@ -573,7 +573,7 @@
assert:
that:
- servers.servers.0.status == 'ACTIVE'
- server is not changed
- server is changed
- name: Delete server
openstack.cloud.server:

View File

@@ -0,0 +1,5 @@
---
share_backend_name: GENERIC_BACKEND
share_type_name: test_share_type
share_type_description: Test share type for CI
share_type_alt_description: Changed test share type

View File

@@ -0,0 +1,130 @@
---
- name: Create share type
openstack.cloud.share_type:
name: "{{ share_type_name }}"
cloud: "{{ cloud }}"
state: present
extra_specs:
share_backend_name: "{{ share_backend_name }}"
snapshot_support: true
create_share_from_snapshot_support: true
description: "{{ share_type_description }}"
register: the_result
- name: Check created share type
vars:
the_share_type: "{{ the_result.share_type }}"
ansible.builtin.assert:
that:
- "'id' in the_result.share_type"
- the_share_type.description == share_type_description
- the_share_type.is_public == True
- the_share_type.name == share_type_name
- the_share_type.extra_specs['share_backend_name'] == share_backend_name
- the_share_type.extra_specs['snapshot_support'] == "True"
- the_share_type.extra_specs['create_share_from_snapshot_support'] == "True"
success_msg: >-
Created share type: {{ the_result.share_type.id }},
Name: {{ the_result.share_type.name }},
Description: {{ the_result.share_type.description }}
- name: Test share type info module
openstack.cloud.share_type_info:
name: "{{ share_type_name }}"
cloud: "{{ cloud }}"
register: info_result
- name: Check share type info result
ansible.builtin.assert:
that:
- info_result.share_type.id == the_result.share_type.id
- info_result.share_type.name == share_type_name
- info_result.share_type.description == share_type_description
success_msg: "Share type info retrieved successfully"
- name: Test, check idempotency
openstack.cloud.share_type:
name: "{{ share_type_name }}"
cloud: "{{ cloud }}"
state: present
extra_specs:
share_backend_name: "{{ share_backend_name }}"
snapshot_support: true
create_share_from_snapshot_support: true
description: "{{ share_type_description }}"
is_public: true
register: the_result
- name: Check result.changed is false
ansible.builtin.assert:
that:
- the_result.changed == false
success_msg: "Request with the same details lead to no changes"
- name: Add extra spec
openstack.cloud.share_type:
cloud: "{{ cloud }}"
name: "{{ share_type_name }}"
state: present
extra_specs:
share_backend_name: "{{ share_backend_name }}"
snapshot_support: true
create_share_from_snapshot_support: true
some_spec: fake_spec
description: "{{ share_type_alt_description }}"
is_public: true
register: the_result
- name: Check share type extra spec
ansible.builtin.assert:
that:
- "'some_spec' in the_result.share_type.extra_specs"
- the_result.share_type.extra_specs["some_spec"] == "fake_spec"
- the_result.share_type.description == share_type_alt_description
success_msg: >-
New extra specs: {{ the_result.share_type.extra_specs }}
- name: Remove extra spec by updating with reduced set
openstack.cloud.share_type:
cloud: "{{ cloud }}"
name: "{{ share_type_name }}"
state: present
extra_specs:
share_backend_name: "{{ share_backend_name }}"
snapshot_support: true
create_share_from_snapshot_support: true
description: "{{ share_type_alt_description }}"
is_public: true
register: the_result
- name: Check extra spec was removed
ansible.builtin.assert:
that:
- "'some_spec' not in the_result.share_type.extra_specs"
success_msg: "Extra spec was successfully removed"
- name: Delete share type
openstack.cloud.share_type:
cloud: "{{ cloud }}"
name: "{{ share_type_name }}"
state: absent
register: the_result
- name: Check deletion was successful
ansible.builtin.assert:
that:
- the_result.changed == true
success_msg: "Share type deleted successfully"
- name: Test deletion idempotency
openstack.cloud.share_type:
cloud: "{{ cloud }}"
name: "{{ share_type_name }}"
state: absent
register: the_result
- name: Check deletion idempotency
ansible.builtin.assert:
that:
- the_result.changed == false
success_msg: "Deletion idempotency works correctly"

View File

@@ -25,3 +25,4 @@ expected_fields:
- updated_at
- use_default_subnet_pool
subnet_name: shade_subnet
segment_name: example_segment

View File

@@ -17,10 +17,20 @@
name: "{{ network_name }}"
state: present
- name: Create network segment {{ segment_name }}
openstack.cloud.network_segment:
cloud: "{{ cloud }}"
name: "{{ segment_name }}"
network: "{{ network_name }}"
network_type: "vxlan"
segmentation_id: 1000
state: present
- name: Create subnet {{ subnet_name }} on network {{ network_name }}
openstack.cloud.subnet:
cloud: "{{ cloud }}"
network_name: "{{ network_name }}"
network_segment: "{{ segment_name }}"
name: "{{ subnet_name }}"
state: present
enable_dhcp: "{{ enable_subnet_dhcp }}"
@@ -142,6 +152,48 @@
assert:
that: subnet is not changed
- name: Create subnet {{ subnet_name }} on network {{ network_name }} without gateway IP
openstack.cloud.subnet:
cloud: "{{ cloud }}"
network_name: "{{ network_name }}"
name: "{{ subnet_name }}"
state: present
cidr: 192.168.0.0/24
disable_gateway_ip: true
register: subnet
- name: Assert changed
assert:
that: subnet is changed
- name: Create subnet {{ subnet_name }} on network {{ network_name }} without gateway IP
openstack.cloud.subnet:
cloud: "{{ cloud }}"
network_name: "{{ network_name }}"
name: "{{ subnet_name }}"
state: present
cidr: 192.168.0.0/24
disable_gateway_ip: true
register: subnet
- name: Assert not changed
assert:
that: subnet is not changed
- name: Delete subnet {{ subnet_name }} again
openstack.cloud.subnet:
cloud: "{{ cloud }}"
name: "{{ subnet_name }}"
state: absent
register: subnet
- name: Delete network segment {{ segment_name }}
openstack.cloud.network_segment:
cloud: "{{ cloud }}"
name: "{{ segment_name }}"
network: "{{ network_name }}"
state: absent
- name: Delete network {{ network_name }}
openstack.cloud.network:
cloud: "{{ cloud }}"

View File

@@ -119,22 +119,23 @@
name: "{{ subnet_name }}"
register: subnet_result
- name: Verify Subnet Allocation Pools Exist
assert:
that:
- idem2 is not changed
- subnet_result.subnets is defined
- subnet_result.subnets | length == 1
- subnet_result.subnets[0].allocation_pools is defined
- subnet_result.subnets[0].allocation_pools | length == 2
# TODO(sshnaidm): Uncomment this section when the issue with allocation_pools is fixed
# - name: Verify Subnet Allocation Pools Exist
# assert:
# that:
# - idem2 is not changed
# - subnet_result.subnets is defined
# - subnet_result.subnets | length == 1
# - subnet_result.subnets[0].allocation_pools is defined
# - subnet_result.subnets[0].allocation_pools | length == 2
- name: Verify Subnet Allocation Pools
assert:
that:
- (subnet_result.subnets[0].allocation_pools.0.start == '192.168.0.2' and subnet_result.subnets[0].allocation_pools.0.end == '192.168.0.8') or
(subnet_result.subnets[0].allocation_pools.0.start == '192.168.0.10' and subnet_result.subnets[0].allocation_pools.0.end == '192.168.0.16')
- (subnet_result.subnets[0].allocation_pools.1.start == '192.168.0.2' and subnet_result.subnets[0].allocation_pools.1.end == '192.168.0.8') or
(subnet_result.subnets[0].allocation_pools.1.start == '192.168.0.10' and subnet_result.subnets[0].allocation_pools.1.end == '192.168.0.16')
# - name: Verify Subnet Allocation Pools
# assert:
# that:
# - (subnet_result.subnets[0].allocation_pools.0.start == '192.168.0.2' and subnet_result.subnets[0].allocation_pools.0.end == '192.168.0.8') or
# (subnet_result.subnets[0].allocation_pools.0.start == '192.168.0.10' and subnet_result.subnets[0].allocation_pools.0.end == '192.168.0.16')
# - (subnet_result.subnets[0].allocation_pools.1.start == '192.168.0.2' and subnet_result.subnets[0].allocation_pools.1.end == '192.168.0.8') or
# (subnet_result.subnets[0].allocation_pools.1.start == '192.168.0.10' and subnet_result.subnets[0].allocation_pools.1.end == '192.168.0.16')
- name: Delete subnet {{ subnet_name }}
openstack.cloud.subnet:

View File

@@ -125,22 +125,23 @@
name: "{{ subnet_name }}"
register: subnet_result
- name: Verify Subnet Allocation Pools Exist
assert:
that:
- idem2 is not changed
- subnet_result.subnets is defined
- subnet_result.subnets | length == 1
- subnet_result.subnets[0].allocation_pools is defined
- subnet_result.subnets[0].allocation_pools | length == 2
- name: Verify Subnet Allocation Pools
assert:
that:
- (subnet_result.subnets[0].allocation_pools.0.start == '192.168.42.2' and subnet_result.subnets[0].allocation_pools.0.end == '192.168.42.4') or
(subnet_result.subnets[0].allocation_pools.0.start == '192.168.42.6' and subnet_result.subnets[0].allocation_pools.0.end == '192.168.42.8')
- (subnet_result.subnets[0].allocation_pools.1.start == '192.168.42.2' and subnet_result.subnets[0].allocation_pools.1.end == '192.168.42.4') or
(subnet_result.subnets[0].allocation_pools.1.start == '192.168.42.6' and subnet_result.subnets[0].allocation_pools.1.end == '192.168.42.8')
# NOTE(gtema): Temporarily disable the check to land other gate fix
#- name: Verify Subnet Allocation Pools Exist
# assert:
# that:
# - idem2 is not changed
# - subnet_result.subnets is defined
# - subnet_result.subnets | length == 1
# - subnet_result.subnets[0].allocation_pools is defined
# - subnet_result.subnets[0].allocation_pools | length == 2
#
#- name: Verify Subnet Allocation Pools
# assert:
# that:
# - (subnet_result.subnets[0].allocation_pools.0.start == '192.168.42.2' and subnet_result.subnets[0].allocation_pools.0.end == '192.168.42.4') or
# (subnet_result.subnets[0].allocation_pools.0.start == '192.168.42.6' and subnet_result.subnets[0].allocation_pools.0.end == '192.168.42.8')
# - (subnet_result.subnets[0].allocation_pools.1.start == '192.168.42.2' and subnet_result.subnets[0].allocation_pools.1.end == '192.168.42.4') or
# (subnet_result.subnets[0].allocation_pools.1.start == '192.168.42.6' and subnet_result.subnets[0].allocation_pools.1.end == '192.168.42.8')
- name: Delete subnet {{ subnet_name }}
openstack.cloud.subnet:

View File

@@ -1,23 +1,28 @@
---
- openstack.cloud.trait:
- name: Create trait
openstack.cloud.trait:
cloud: "{{ cloud }}"
state: present
id: "{{ trait_name }}"
delegate_to: localhost
register: item
until: result is success
retries: 5
delay: 20
register: result
- assert:
- name: Assert trait
assert:
that:
- "'name' in item.trait"
- "item.trait.id == trait_name"
- "'name' in result.trait"
- "result.trait.id == trait_name"
- openstack.cloud.trait:
- name: Remove trait
openstack.cloud.trait:
cloud: "{{ cloud }}"
state: absent
id: "{{ trait_name }}"
delegate_to: localhost
register: item
register: result1
- assert:
- name: Assert trait removed
assert:
that:
- "'trait' not in item"
- "'trait' not in result1"

View File

@@ -53,7 +53,7 @@
- ip_address: 10.5.6.55
register: subport
- name: Create trunk
- name: Create trunk without subports
openstack.cloud.trunk:
cloud: "{{ cloud }}"
state: present
@@ -61,15 +61,17 @@
port: "{{ parent_port_name }}"
register: trunk
- debug: var=trunk
- name: Display return values of trunk module
ansible.builtin.debug:
var: trunk
- name: assert return values of trunk module
assert:
- name: Assert return values of trunk module
ansible.builtin.assert:
that:
# allow new fields to be introduced but prevent fields from being removed
- expected_fields|difference(trunk.trunk.keys())|length == 0
- name: Add subport to trunk
- name: Add subport to trunk by name
openstack.cloud.trunk:
cloud: "{{ cloud }}"
state: present
@@ -79,14 +81,66 @@
- port: "{{ subport_name }}"
segmentation_type: vlan
segmentation_id: 123
register: trunk_subport_by_name
- name: Update subport from trunk
- name: Assert the subport is part of the trunk
ansible.builtin.assert:
that:
- trunk_subport_by_name.trunk.sub_ports|length == 1
- name: Remove subport from trunk
openstack.cloud.trunk:
cloud: "{{ cloud }}"
state: present
name: "{{ trunk_name }}"
port: "{{ parent_port_name }}"
sub_ports: []
register: trunk_subport_removed
- name: Assert no subports are part of the trunk
ansible.builtin.assert:
that:
- trunk_subport_removed.trunk.sub_ports|length == 0
- name: Add subport to trunk by ID
openstack.cloud.trunk:
cloud: "{{ cloud }}"
state: present
name: "{{ trunk_name }}"
port: "{{ parent_port_name }}"
sub_ports:
- port: "{{ subport.port.id }}"
segmentation_type: vlan
segmentation_id: 123
register: trunk_subport_by_id
- name: Assert the subport is part of the trunk
ansible.builtin.assert:
that:
- trunk_subport_by_id.trunk.sub_ports|length == 1
- name: Delete trunk
openstack.cloud.trunk:
cloud: "{{ cloud }}"
state: absent
name: "{{ trunk_name }}"
- name: Create trunk with subports
openstack.cloud.trunk:
cloud: "{{ cloud }}"
state: present
name: "{{ trunk_name }}"
port: "{{ parent_port_name }}"
sub_ports:
- port: "{{ subport.port.id }}"
segmentation_type: vlan
segmentation_id: 123
register: trunk_with_subports
- name: Assert the subport is part of the trunk
ansible.builtin.assert:
that:
- trunk_with_subports.trunk.sub_ports|length == 1
- name: Delete trunk
openstack.cloud.trunk:
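
Outside of CI, the subport handling exercised above can be reproduced with a minimal playbook; the names here (`my-trunk`, `parent0`, `subport0`) are illustrative and assume the ports already exist:

```yaml
# Sketch of trunk subport management; port and trunk names are hypothetical.
- name: Attach a subport to a trunk by port name
  openstack.cloud.trunk:
    cloud: devstack
    state: present
    name: my-trunk
    port: parent0
    sub_ports:
      - port: subport0
        segmentation_type: vlan
        segmentation_id: 123

- name: Detach all subports by passing an empty list
  openstack.cloud.trunk:
    cloud: devstack
    state: present
    name: my-trunk
    port: parent0
    sub_ports: []
```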

View File

@@ -12,14 +12,35 @@
that: item in vol.volume
loop: "{{ expected_fields }}"
- name: Create volume from existing volume
- assert:
that: not vol.volume.is_bootable
- name: Create bootable volume from existing volume
openstack.cloud.volume:
cloud: "{{ cloud }}"
state: present
size: 1
volume: "{{ vol.volume.id }}"
name: ansible_volume1
is_bootable: true
description: Test volume
register: vol
- assert:
that: vol.volume.is_bootable
- name: Make the first volume bootable
openstack.cloud.volume:
cloud: "{{ cloud }}"
state: present
size: 1
name: ansible_volume
is_bootable: true
description: Test volume
register: vol
- assert:
that: vol.volume.is_bootable
- name: Delete volume
openstack.cloud.volume:

View File

@@ -0,0 +1,7 @@
---
volume_image_metadata_cloud: "{{ cloud | default(omit) }}"
volume_image_metadata_volume_name: test-image-metadata-volume
volume_image_metadata_size: 1
volume_image_metadata:
disk_format: qcow2
container_format: bare

View File

@@ -0,0 +1,103 @@
---
- name: Get available images
openstack.cloud.image_info:
cloud: "{{ volume_image_metadata_cloud }}"
register: image_info
- name: Select test image
set_fact:
volume_image_metadata_image_id: >-
{{
image_info.images
| selectattr('status', 'equalto', 'active')
| list
| first
| default({})
}}
- name: Assert an image is available for testing
assert:
that:
- volume_image_metadata_image_id.id is defined
fail_msg: "No active images available in the cloud for volume_image_metadata CI test"
- name: Create a test volume from image
openstack.cloud.volume:
cloud: "{{ volume_image_metadata_cloud }}"
state: present
name: "{{ volume_image_metadata_volume_name }}"
image: "{{ volume_image_metadata_image_id.id }}"
size: "{{ volume_image_metadata_size }}"
register: created_volume
- name: Assert volume was created
assert:
that:
- created_volume.volume is defined
- created_volume.volume.id is defined
- name: Get volume details
openstack.cloud.volume_info:
cloud: "{{ volume_image_metadata_cloud }}"
name: "{{ volume_image_metadata_volume_name }}"
register: volume_info
- name: Assert volume has image metadata
assert:
that:
- volume_info.volumes[0].volume_image_metadata is defined
- volume_info.volumes[0].volume_image_metadata | length > 0
# --------------------------------------------------------------------
# Exercise new module
# --------------------------------------------------------------------
- name: Set volume image metadata
openstack.cloud.volume_image_metadata:
cloud: "{{ volume_image_metadata_cloud }}"
volume: "{{ created_volume.volume.id }}"
image_metadata: "{{ volume_image_metadata }}"
register: image_meta_result
- name: Assert image metadata changed
assert:
that:
- image_meta_result.changed | bool
# --------------------------------------------------------------------
# Idempotency check
# --------------------------------------------------------------------
- name: Set volume image metadata again (idempotent)
openstack.cloud.volume_image_metadata:
cloud: "{{ volume_image_metadata_cloud }}"
volume: "{{ created_volume.volume.id }}"
image_metadata: "{{ volume_image_metadata }}"
register: image_meta_idempotent
- name: Assert idempotent behavior
assert:
that:
- not image_meta_idempotent.changed | bool
# --------------------------------------------------------------------
# Verify metadata persisted
# --------------------------------------------------------------------
- name: Re-fetch volume details
openstack.cloud.volume_info:
cloud: "{{ volume_image_metadata_cloud }}"
name: "{{ volume_image_metadata_volume_name }}"
register: final_volume_info
- name: Verify image metadata values
assert:
that:
- final_volume_info.volumes[0].volume_image_metadata.disk_format == "qcow2"
- final_volume_info.volumes[0].volume_image_metadata.container_format == "bare"
# --------------------------------------------------------------------
# Cleanup
# --------------------------------------------------------------------
- name: Delete test volume
openstack.cloud.volume:
cloud: "{{ volume_image_metadata_cloud }}"
state: absent
name: "{{ volume_image_metadata_volume_name }}"
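
A minimal standalone use of the new module, mirroring the CI tasks above; the cloud and volume names are illustrative:

```yaml
# Sketch only: "devstack" and "my-boot-volume" are placeholder names.
- name: Set Glance-derived image metadata on a volume
  openstack.cloud.volume_image_metadata:
    cloud: devstack
    volume: my-boot-volume   # volume name or ID
    image_metadata:
      disk_format: qcow2
      container_format: bare
```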

View File

@@ -0,0 +1,32 @@
test_volume: ansible_test_volume
managed_volume: managed_test_volume
expected_fields:
- attachments
- availability_zone
- consistency_group_id
- created_at
- updated_at
- description
- extended_replication_status
- group_id
- host
- image_id
- is_bootable
- is_encrypted
- is_multiattach
- migration_id
- migration_status
- project_id
- replication_driver_data
- replication_status
- scheduler_hints
- size
- snapshot_id
- source_volume_id
- status
- user_id
- volume_image_metadata
- volume_type
- id
- name
- metadata

View File

@@ -0,0 +1,65 @@
---
- name: Create volume
openstack.cloud.volume:
cloud: "{{ cloud }}"
state: present
size: 1
name: "{{ test_volume }}"
description: Test volume
register: vol
- assert:
that: item in vol.volume
loop: "{{ expected_fields }}"
- name: Unmanage volume
openstack.cloud.volume_manage:
cloud: "{{ cloud }}"
state: absent
name: "{{ vol.volume.id }}"
- name: Unmanage volume again
openstack.cloud.volume_manage:
cloud: "{{ cloud }}"
state: absent
name: "{{ vol.volume.id }}"
register: unmanage_idempotency
- assert:
that:
- unmanage_idempotency is not changed
- name: Manage volume
openstack.cloud.volume_manage:
cloud: "{{ cloud }}"
state: present
source_name: volume-{{ vol.volume.id }}
host: "{{ vol.volume.host }}"
name: "{{ managed_volume }}"
register: new_vol
- assert:
that:
- new_vol.volume.name == managed_volume
- name: Manage volume again
openstack.cloud.volume_manage:
cloud: "{{ cloud }}"
state: present
source_name: volume-{{ vol.volume.id }}
host: "{{ vol.volume.host }}"
name: "{{ managed_volume }}"
register: vol_idempotency
- assert:
that:
- vol_idempotency is not changed
- pause:
seconds: 10
- name: Delete volume
openstack.cloud.volume:
cloud: "{{ cloud }}"
state: absent
name: "{{ managed_volume }}"

View File

@@ -124,6 +124,11 @@ if [ ! -e /etc/magnum ]; then
tag_opt+=" --skip-tags coe_cluster,coe_cluster_template"
fi
if ! systemctl is-enabled devstack@m-api.service 2>&1; then
# Skip share_type tasks if Manila is not available
tag_opt+=" --skip-tags share_type"
fi
cd ci/
# Run tests

View File

@@ -32,10 +32,13 @@
- { role: loadbalancer, tags: loadbalancer }
- { role: logging, tags: logging }
- { role: network, tags: network }
- { role: network_segment, tags: network_segment }
- { role: neutron_rbac_policy, tags: neutron_rbac_policy }
- { role: object, tags: object }
- { role: object_container, tags: object_container }
- { role: object_containers_info, tags: object_containers_info }
- { role: port, tags: port }
- { role: port_forwarding, tags: port_forwarding }
- { role: trait, tags: trait }
- { role: trunk, tags: trunk }
- { role: project, tags: project }
@@ -52,12 +55,15 @@
- { role: server_group, tags: server_group }
- { role: server_metadata, tags: server_metadata }
- { role: server_volume, tags: server_volume }
- { role: share_type, tags: share_type }
- { role: stack, tags: stack }
- { role: subnet, tags: subnet }
- { role: subnet_pool, tags: subnet_pool }
- { role: volume, tags: volume }
- { role: volume_type, tags: volume_type }
- { role: volume_backup, tags: volume_backup }
- { role: volume_manage, tags: volume_manage }
- { role: volume_service, tags: volume_service }
- { role: volume_snapshot, tags: volume_snapshot }
- { role: volume_type_access, tags: volume_type_access }
- { role: volume_image_metadata, tags: volume_image_metadata }

View File

@@ -11,7 +11,7 @@ For hacking on the Ansible OpenStack collection it helps to [prepare a DevStack
## Hosting
* [Bug tracker][storyboard]
* [Bug tracker][bugtracker]
* [Mailing list `openstack-discuss@lists.openstack.org`][openstack-discuss].
Prefix subjects with `[aoc]` or `[aco]` for faster responses.
* [Code Hosting][opendev-a-c-o]
@@ -188,4 +188,4 @@ Read [Release Guide](releasing.md) on how to publish new releases.
[openstacksdk-cloud-layer-stays]: https://meetings.opendev.org/irclogs/%23openstack-sdks/%23openstack-sdks.2022-04-27.log.html
[openstacksdk-to-dict]: https://opendev.org/openstack/openstacksdk/src/branch/master/openstack/resource.py
[openstacksdk]: https://opendev.org/openstack/openstacksdk
[storyboard]: https://storyboard.openstack.org/#!/project/openstack/ansible-collections-openstack
[bugtracker]: https://bugs.launchpad.net/ansible-collections-openstack

View File

@@ -11,7 +11,7 @@ dependencies: {}
repository: https://opendev.org/openstack/ansible-collections-openstack
documentation: https://docs.ansible.com/ansible/latest/collections/openstack/cloud/index.html
homepage: https://opendev.org/openstack/ansible-collections-openstack
issues: https://storyboard.openstack.org/#!/project/openstack/ansible-collections-openstack
issues: https://bugs.launchpad.net/ansible-collections-openstack
build_ignore:
- "*.tar.gz"
- build_artifact
@@ -32,4 +32,4 @@ build_ignore:
- .vscode
- ansible_collections_openstack.egg-info
- changelogs
version: 2.4.1
version: 2.5.0

View File

@@ -11,7 +11,7 @@ dependencies: {}
repository: https://opendev.org/openstack/ansible-collections-openstack
documentation: https://docs.ansible.com/ansible/latest/collections/openstack/cloud/index.html
homepage: https://opendev.org/openstack/ansible-collections-openstack
issues: https://storyboard.openstack.org/#!/project/openstack/ansible-collections-openstack
issues: https://bugs.launchpad.net/ansible-collections-openstack
build_ignore:
- "*.tar.gz"
- build_artifact

View File

@@ -10,6 +10,7 @@ action_groups:
- baremetal_node_action
- baremetal_node_info
- baremetal_port
- baremetal_port_group
- baremetal_port_info
- catalog_service
- catalog_service_info
@@ -51,12 +52,16 @@ action_groups:
- lb_pool
- loadbalancer
- network
- network_segment
- networks_info
- neutron_rbac_policies_info
- neutron_rbac_policy
- object
- object_container
- object_containers_info
- port
- port_forwarding
- port_forwarding_info
- port_info
- project
- project_info
@@ -77,6 +82,8 @@ action_groups:
- server_info
- server_metadata
- server_volume
- share_type
- share_type_info
- stack
- stack_info
- subnet
@@ -84,6 +91,7 @@ action_groups:
- subnets_info
- trunk
- volume
- volume_manage
- volume_backup
- volume_backup_info
- volume_info
@@ -91,3 +99,4 @@ action_groups:
- volume_snapshot
- volume_snapshot_info
- volume_type_access
- volume_image_metadata

View File

@@ -102,6 +102,12 @@ options:
- Using I(only_ipv4) helps when running Ansible in an IPv4-only setup.
type: bool
default: false
server_filters:
description:
- A dictionary of key/value pairs used to filter the list of servers.
- Available filter parameters are documented at https://docs.openstack.org/api-ref/compute/#list-servers
type: dict
default: {}
show_all:
description:
- Whether all servers should be listed or not.
@@ -279,7 +285,7 @@ class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
clouds_yaml_path = self.get_option('clouds_yaml_path')
config_files = openstack.config.loader.CONFIG_FILES
if clouds_yaml_path:
config_files += clouds_yaml_path
config_files = clouds_yaml_path + config_files
config = openstack.config.loader.OpenStackConfig(
config_files=config_files)
@@ -309,6 +315,7 @@ class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
expand_hostvars = self.get_option('expand_hostvars')
all_projects = self.get_option('all_projects')
server_filters = self.get_option('server_filters')
servers = []
def _expand_server(server, cloud, volumes):
@@ -355,7 +362,8 @@ class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
all_projects=all_projects,
# details are required because 'addresses'
# attribute must be populated
details=True)
details=True,
**server_filters)
]:
servers.append(server)
except openstack.exceptions.OpenStackCloudException as e:
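
The new `server_filters` option is read from the inventory plugin configuration file; a sketch of an `openstack.yaml` inventory using it (the filter values shown are illustrative):

```yaml
# openstack.yaml -- inventory plugin configuration (sketch)
plugin: openstack.cloud.openstack
all_projects: false
expand_hostvars: true
server_filters:
  status: ACTIVE            # passed through to the Compute "list servers" API
  availability_zone: nova   # example value; any documented filter works
```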

View File

@@ -32,16 +32,15 @@
import abc
import copy
from ansible.module_utils.six import raise_from
try:
from ansible.module_utils.compat.version import StrictVersion
except ImportError:
try:
from distutils.version import StrictVersion
except ImportError as exc:
raise_from(ImportError('To use this plugin or module with ansible-core'
' < 2.11, you need to use Python < 3.12 with '
'distutils.version present'), exc)
raise ImportError(f'To use this plugin or module with ansible-core'
f' < 2.11, you need to use Python < 3.12 with '
f'distutils.version present. {exc}')
import importlib
import os

View File

@@ -243,6 +243,10 @@ node:
retired_reason:
description: TODO
type: str
shard:
description: The shard key for a node.
returned: success
type: str
states:
description: |
Links to the collection of states. Note that this resource is also

View File

@@ -437,6 +437,10 @@ node:
description: The reason the node is marked as retired.
returned: success
type: str
shard:
description: The shard key for a node.
returned: success
type: str
states:
description: Links to the collection of states.
returned: success

View File

@@ -289,6 +289,10 @@ nodes:
description: The reason the node is marked as retired.
returned: success
type: str
shard:
description: The shard key for a node.
returned: success
type: str
states:
description: Links to the collection of states.
returned: success
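
The shard key documented above is set through the baremetal node module; assuming the module exposes a `shard` parameter mirroring the new return field, usage would look like:

```yaml
# Sketch: 'shard' is assumed to be accepted as a module parameter,
# matching the 'shard' return field added in this change.
- name: Assign a baremetal node to a shard
  openstack.cloud.baremetal_node:
    cloud: devstack
    name: bm-0
    shard: shard-1
    state: present
```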

View File

@@ -0,0 +1,257 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2026 OpenStack Ansible SIG
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
DOCUMENTATION = r'''
module: baremetal_port_group
short_description: Create/Delete Bare Metal port group resources from OpenStack
author: OpenStack Ansible SIG
description:
- Create, update and remove Bare Metal port groups from OpenStack.
options:
id:
description:
- ID of the port group.
- Will be auto-generated if not specified.
type: str
aliases: ['uuid']
name:
description:
- Name of the port group.
type: str
node:
description:
- ID or Name of the node this resource belongs to.
- Required when creating a new port group.
type: str
address:
description:
- Physical hardware address of this port group, typically the hardware
MAC address.
type: str
extra:
description:
- A set of one or more arbitrary metadata key and value pairs.
type: dict
standalone_ports_supported:
description:
- Whether the port group supports ports that are not members of this
port group.
type: bool
mode:
description:
- The port group mode.
type: str
properties:
description:
- Key/value properties for the port group.
type: dict
state:
description:
- Indicates desired state of the resource.
choices: ['present', 'absent']
default: present
type: str
extends_documentation_fragment:
- openstack.cloud.openstack
'''
EXAMPLES = r'''
- name: Create Bare Metal port group
openstack.cloud.baremetal_port_group:
cloud: devstack
state: present
name: bond0
node: bm-0
address: fa:16:3e:aa:aa:aa
mode: '802.3ad'
standalone_ports_supported: true
register: result
- name: Update Bare Metal port group
openstack.cloud.baremetal_port_group:
cloud: devstack
state: present
id: 1a85ebca-22bf-42eb-ad9e-f640789b8098
mode: 'active-backup'
properties:
miimon: '100'
register: result
- name: Delete Bare Metal port group
openstack.cloud.baremetal_port_group:
cloud: devstack
state: absent
id: 1a85ebca-22bf-42eb-ad9e-f640789b8098
register: result
'''
RETURN = r'''
port_group:
description: A port group dictionary; only a subset of the keys listed
below may be returned, depending on your cloud provider.
returned: success
type: dict
contains:
address:
description: Physical hardware address of the port group.
returned: success
type: str
created_at:
description: Bare Metal port group created at timestamp.
returned: success
type: str
extra:
description: A set of one or more arbitrary metadata key and value
pairs.
returned: success
type: dict
id:
description: The UUID for the Bare Metal port group resource.
returned: success
type: str
links:
description: A list of relative links, including the self and
bookmark links.
returned: success
type: list
mode:
description: The port group mode.
returned: success
type: str
name:
description: Bare Metal port group name.
returned: success
type: str
node_id:
description: UUID of the Bare Metal node this resource belongs to.
returned: success
type: str
properties:
description: Key/value properties for this port group.
returned: success
type: dict
standalone_ports_supported:
description: Whether standalone ports are supported.
returned: success
type: bool
updated_at:
description: Bare Metal port group updated at timestamp.
returned: success
type: str
'''
from ansible_collections.openstack.cloud.plugins.module_utils.openstack import (
OpenStackModule
)
class BaremetalPortGroupModule(OpenStackModule):
argument_spec = dict(
id=dict(aliases=['uuid']),
name=dict(),
node=dict(),
address=dict(),
extra=dict(type='dict'),
standalone_ports_supported=dict(type='bool'),
mode=dict(),
properties=dict(type='dict'),
state=dict(default='present', choices=['present', 'absent']),
)
module_kwargs = dict(
required_one_of=[
('id', 'name'),
],
supports_check_mode=True,
)
def _find_port_group(self):
id_or_name = self.params['id'] if self.params['id'] else self.params['name']
if not id_or_name:
return None
try:
return self.conn.baremetal.find_port_group(id_or_name)
except self.sdk.exceptions.ResourceNotFound:
return None
def _build_create_attrs(self):
attrs = {}
for key in ['id', 'name', 'address', 'extra',
'standalone_ports_supported', 'mode', 'properties']:
if self.params[key] is not None:
attrs[key] = self.params[key]
node_name_or_id = self.params['node']
if not node_name_or_id:
self.fail_json(msg="Parameter 'node' is required when creating a new port group")
node = self.conn.baremetal.find_node(node_name_or_id, ignore_missing=False)
attrs['node_id'] = node['id']
return attrs
def _build_update_attrs(self, port_group):
attrs = {}
for key in ['name', 'address', 'extra',
'standalone_ports_supported', 'mode', 'properties']:
if self.params[key] is not None and self.params[key] != port_group.get(key):
attrs[key] = self.params[key]
return attrs
def _will_change(self, port_group, state):
if state == 'absent':
return bool(port_group)
if not port_group:
return True
return bool(self._build_update_attrs(port_group))
def run(self):
state = self.params['state']
port_group = self._find_port_group()
if self.ansible.check_mode:
if state == 'present' and not port_group:
self._build_create_attrs()
self.exit_json(changed=self._will_change(port_group, state))
if state == 'present':
if not port_group:
port_group = self.conn.baremetal.create_port_group(
**self._build_create_attrs())
self.exit_json(
changed=True,
port_group=port_group.to_dict(computed=False))
update_attrs = self._build_update_attrs(port_group)
changed = bool(update_attrs)
if changed:
port_group = self.conn.baremetal.update_port_group(
port_group['id'], **update_attrs)
self.exit_json(
changed=changed,
port_group=port_group.to_dict(computed=False))
if not port_group:
self.exit_json(changed=False)
self.conn.baremetal.delete_port_group(port_group['id'])
self.exit_json(changed=True)
def main():
module = BaremetalPortGroupModule()
module()
if __name__ == "__main__":
main()

View File

@@ -41,11 +41,11 @@ extends_documentation_fragment:
EXAMPLES = r'''
- name: Fetch all DNS zones
openstack.cloud.dns_zones:
openstack.cloud.dns_zone_info:
cloud: devstack
- name: Fetch DNS zones by name
openstack.cloud.dns_zones:
openstack.cloud.dns_zone_info:
cloud: devstack
name: ansible.test.zone.
'''

View File

@@ -12,6 +12,11 @@ description:
- Create, update or delete an identity provider of the OpenStack
identity (Keystone) service.
options:
authorization_ttl:
description:
- Time to keep the role assignments for users authenticating via this identity provider.
- When not provided, the global default configured in the Identity service will be used.
type: int
description:
description:
- The description of the identity provider.
@@ -58,6 +63,7 @@ EXAMPLES = r'''
name: example_provider
domain_id: 0123456789abcdef0123456789abcdef
description: 'My example IDP'
authorization_ttl: 300
remote_ids:
- 'https://auth.example.com/auth/realms/ExampleRealm'
@@ -74,6 +80,10 @@ identity_provider:
returned: On success when I(state) is C(present).
type: dict
contains:
authorization_ttl:
description: Time to keep the role assignments for users authenticating via this identity provider.
type: int
sample: 300
description:
description: Identity provider description
type: str
@@ -104,6 +114,7 @@ from ansible_collections.openstack.cloud.plugins.module_utils.resource import St
class IdentityProviderModule(OpenStackModule):
argument_spec = dict(
authorization_ttl=dict(type='int'),
description=dict(),
domain_id=dict(),
id=dict(required=True, aliases=['name']),
@@ -127,7 +138,7 @@ class IdentityProviderModule(OpenStackModule):
kwargs['attributes'] = \
dict((k, self.params[k])
for k in ['description', 'domain_id', 'id', 'is_enabled',
for k in ['authorization_ttl', 'description', 'domain_id', 'id', 'is_enabled',
'remote_ids']
if self.params[k] is not None)

View File

@@ -40,6 +40,13 @@ options:
required: true
type: list
elements: dict
schema_version:
description:
- The federated attribute mapping schema version.
- The client-side default is 'None', in which case the backend
applies the default from its
'attribute_mapping_default_schema_version' option.
type: str
state:
description:
- Whether the mapping should be C(present) or C(absent).
@@ -69,6 +76,7 @@ EXAMPLES = r'''
any_one_of:
- Contractor
- SubContractor
schema_version: '1.0'
- name: Delete a mapping
openstack.cloud.federation_mapping:
@@ -93,6 +101,9 @@ mapping:
rules:
description: List of rules for the mapping
type: list
schema_version:
description: Schema version of the mapping
type: str
'''
from ansible_collections.openstack.cloud.plugins.module_utils.openstack import OpenStackModule
@@ -108,6 +119,7 @@ class IdentityFederationMappingModule(OpenStackModule):
local=dict(required=True, type='list', elements='dict'),
remote=dict(required=True, type='list', elements='dict')
)),
schema_version=dict(default=None),
state=dict(default='present', choices=['absent', 'present']),
)
@@ -155,7 +167,7 @@ class IdentityFederationMappingModule(OpenStackModule):
if len(self.params['rules']) < 1:
self.fail_json(msg='At least one rule must be passed')
attributes = dict((k, self.params[k]) for k in ['rules']
attributes = dict((k, self.params[k]) for k in ['rules', 'schema_version']
if k in self.params and self.params[k] is not None
and self.params[k] != mapping[k])
@@ -166,7 +178,8 @@ class IdentityFederationMappingModule(OpenStackModule):
def _create(self):
return self.conn.identity.create_mapping(id=self.params['name'],
rules=self.params['rules'])
rules=self.params['rules'],
schema_version=self.params['schema_version'])
def _delete(self, mapping):
self.conn.identity.delete_mapping(mapping.id)
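
A sketch of creating a mapping with the new option; the mapping name and rules below are illustrative:

```yaml
# Sketch only: mapping name and rule contents are placeholders.
- name: Create a federation mapping with an explicit schema version
  openstack.cloud.federation_mapping:
    cloud: devstack
    state: present
    name: example_mapping
    schema_version: '1.0'
    rules:
      - local:
          - user:
              name: '{0}'
        remote:
          - type: OIDC-preferred_username
```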

View File

@@ -17,8 +17,9 @@ description:
options:
fixed_address:
description:
- To which fixed IP of server the floating IP address should be
- To which fixed IP of attached port the floating IP address should be
attached to.
aliases: ["fixed_ip_address"]
type: str
floating_ip_address:
description:
@@ -35,6 +36,7 @@ options:
network:
description:
- The name or ID of a neutron external network or a nova pool name.
- When I(server) is not defined, I(network) is required
type: str
purge:
description:
@@ -57,7 +59,6 @@ options:
description:
- The name or ID of the server to which the IP address
should be assigned.
required: true
type: str
state:
description:
@@ -183,23 +184,24 @@ from ansible_collections.openstack.cloud.plugins.module_utils.openstack import O
class NetworkingFloatingIPModule(OpenStackModule):
argument_spec = dict(
fixed_address=dict(),
fixed_address=dict(aliases=['fixed_ip_address']),
floating_ip_address=dict(),
nat_destination=dict(aliases=['fixed_network', 'internal_network']),
network=dict(),
purge=dict(type='bool', default=False),
reuse=dict(type='bool', default=False),
server=dict(required=True),
server=dict(),
state=dict(default='present', choices=['absent', 'present']),
)
module_kwargs = dict(
required_if=[
['state', 'absent', ['floating_ip_address']]
['state', 'present', ['server', 'network'], True],
['state', 'absent', ['floating_ip_address'], False],
],
required_by={
'floating_ip_address': ('network'),
}
},
)
def run(self):
@@ -214,64 +216,36 @@ class NetworkingFloatingIPModule(OpenStackModule):
changed = False
fixed_address = self.params['fixed_address']
floating_ip_address = self.params['floating_ip_address']
nat_destination_name_or_id = self.params['nat_destination']
nat_destination_id = (
self.nat_destination['id'] if self.nat_destination else None
)
network_id = self.network['id'] if self.network else None
server = self.server
ips = self._find_ips(
server=self.server,
server=server,
floating_ip_address=floating_ip_address,
network_id=network_id,
fixed_address=fixed_address,
nat_destination_name_or_id=nat_destination_name_or_id)
nat_destination_id=nat_destination_id
)
# First floating ip satisfies our requirements
ip = ips[0] if ips else None
ip = None
if not ips:
if server:
if floating_ip_address:
# A specific floating ip address has been requested
if not ip:
# If a specific floating ip address has been requested
# and it does not exist yet then create it
# openstacksdk's create_ip requires floating_ip_address
# and floating_network_id to be set
self.conn.network.create_ip(
floating_ip_address=floating_ip_address,
floating_network_id=network_id)
changed = True
else: # ip
# Requested floating ip address exists already
if ip.port_details and (ip.port_details['status'] == 'ACTIVE') \
and (floating_ip_address not in self._filter_ips(
self.server)):
# Floating ip address exists and has been attached
# but to a different server
# Requested ip has been attached to different server
self.fail_json(
msg="Floating ip {0} has been attached to a different "
"server".format(floating_ip_address))
if not ip \
or floating_ip_address not in self._filter_ips(self.server):
# Requested floating ip address does not exist or has not been
# assigned to server
# Requested floating ip address does not exist
self.conn.add_ip_list(
server=self.server,
server=server,
ips=[floating_ip_address],
wait=self.params['wait'],
timeout=self.params['timeout'],
fixed_address=fixed_address)
fixed_address=fixed_address
)
changed = True
else:
# Requested floating ip address has been assigned to server
pass
elif not ips: # and not floating_ip_address
else:
# No specific floating ip has been requested and none of the
# floating ips which have been assigned to the server matches
# requirements
@@ -319,34 +293,97 @@ class NetworkingFloatingIPModule(OpenStackModule):
# a6b0ece2821ea79330c4067100295f6bdcbe456e/openstack/cloud/
# _floating_ip.py#L987
self.conn.add_ips_to_server(
server=self.server,
server=server,
ip_pool=network_id,
ips=None, # No specific floating ip requested
reuse=self.params['reuse'],
fixed_address=fixed_address,
wait=self.params['wait'],
timeout=self.params['timeout'],
nat_destination=nat_destination_name_or_id)
nat_destination=nat_destination_id
)
changed = True
else: # not server
kwargs = self._params_to_kwargs(
floating_ip_address,
network_id,
fixed_address,
self.nat_destination
)
# create the ip
ip = self.conn.network.create_ip(**kwargs)
changed = True
else: # ips
ip = ips[0]
if server:
server_ips = self._filter_ips(server)
if ip.floating_ip_address not in server_ips:
port_details = ip.port_details
if (port_details
and port_details['status'] == 'ACTIVE'):
# Requested ip has been attached to different server
self.fail_json(
msg="Floating ip {0} has been attached to "
"a different server".format(
floating_ip_address))
else:
# Found one or more floating ips which satisfy requirements
# Requested floating ip address has not been
# assigned to server
self.conn.add_ip_list(
server=server,
ips=[ip.floating_ip_address],
wait=self.params['wait'],
timeout=self.params['timeout'],
fixed_address=fixed_address
)
changed = True
else:
# floating ip is already assigned to the server
pass
if changed:
elif len(ips) > 1: # not server
self.fail_json(msg='Found more than one floating ip')
else:
kwargs = self._params_to_kwargs(
floating_ip_address,
network_id,
fixed_address,
self.nat_destination
)
for key, value in kwargs.items():
if ip[key] != value:
self.conn.network.update_ip(ip, **kwargs)
changed = True
break
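The loop above is the module's idempotency check: compare each desired attribute against the live resource and issue an update only on a mismatch. A standalone sketch of that drift check with plain dicts instead of SDK resources (names hypothetical):

```python
def needs_update(current, desired):
    """Return True when any desired attribute differs from the
    resource's current value -- the drift check performed before
    calling update_ip in the module above."""
    return any(current.get(key) != value for key, value in desired.items())

current_ip = {'fixed_ip_address': '10.0.0.5', 'port_id': 'abc'}
assert needs_update(current_ip, {'fixed_ip_address': '10.0.0.9'})
assert not needs_update(current_ip, {'port_id': 'abc'})
```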
if changed and server:
# update server details such as addresses
self.server = self.conn.compute.get_server(self.server)
server = self.conn.compute.get_server(server)
# Update the floating ip resource
ips = self._find_ips(
self.server, floating_ip_address, network_id,
fixed_address, nat_destination_name_or_id)
server,
floating_ip_address,
network_id,
fixed_address,
nat_destination_id
)
# ips can be empty, e.g. when server has no private ipv4
# address to which a floating ip address can be attached
ip = ips[0] if ips else None
self.exit_json(
changed=changed,
floating_ip=ips[0].to_dict(computed=False) if ips else None)
floating_ip=ip.to_dict(computed=False)
)
def _detach_and_delete(self):
ips = self._find_ips(
@@ -354,7 +391,7 @@ class NetworkingFloatingIPModule(OpenStackModule):
floating_ip_address=self.params['floating_ip_address'],
network_id=self.network['id'] if self.network else None,
fixed_address=self.params['fixed_address'],
nat_destination_name_or_id=self.params['nat_destination'])
nat_destination_id=self.nat_destination['id'] if self.nat_destination else None)
if not ips:
# Nothing to detach
@@ -362,19 +399,22 @@ class NetworkingFloatingIPModule(OpenStackModule):
changed = False
for ip in ips:
if self.server:
if ip['fixed_ip_address']:
# Silently ignore that ip might not be attached to server
#
# self.conn.network.update_ip(ip_id, port_id=None) does not
# handle nova network but self.conn.detach_ip_from_server()
# does so
self.conn.detach_ip_from_server(server_id=self.server['id'],
changed = self.conn.detach_ip_from_server(server_id=self.server['id'],
floating_ip_id=ip['id'])
# OpenStackSDK sets {"port_id": None} to detach a floating
# ip from a device, but there might be a delay until a
# server does not list it in addresses any more.
else: # not self.server
if ip['port_id']:
changed = True
self.conn.network.update_ip(floating_ip=ip['id'], port_id=None)
if self.params['purge']:
self.conn.network.delete_ip(ip['id'])
@@ -397,39 +437,56 @@ class NetworkingFloatingIPModule(OpenStackModule):
# Returns a list not an iterator here because
# it is iterated several times below
return [address['addr']
for address in _flatten(server['addresses'].values())
if address['OS-EXT-IPS:type'] == 'floating']
addresses = _flatten(server['addresses'].values())
return [
address['addr']
for address in addresses
if address['OS-EXT-IPS:type'] == 'floating'
]
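The `_filter_ips` helper flattens Nova's per-network address lists and keeps only the floating entries. A self-contained sketch of the same filtering with sample data (the addresses dict below is illustrative, not live API output):

```python
from itertools import chain

def filter_floating_ips(server):
    """Flatten the Nova-style addresses mapping and keep addresses
    tagged as floating, mirroring _filter_ips above."""
    addresses = chain.from_iterable(server['addresses'].values())
    return [a['addr'] for a in addresses
            if a['OS-EXT-IPS:type'] == 'floating']

server = {'addresses': {
    'private': [
        {'addr': '10.0.0.5', 'OS-EXT-IPS:type': 'fixed'},
        {'addr': '203.0.113.7', 'OS-EXT-IPS:type': 'floating'},
    ],
}}
assert filter_floating_ips(server) == ['203.0.113.7']
```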
def _find_ips(self,
server,
floating_ip_address,
network_id,
fixed_address,
nat_destination_name_or_id):
nat_destination_id):
# Check which floating ips matches our requirements.
# They might or might not be attached to our server.
if floating_ip_address:
# A specific floating ip address has been requested
ip = self.conn.network.find_ip(floating_ip_address)
return [ip] if ip else []
elif (not fixed_address and nat_destination_name_or_id):
elif server:
if (not fixed_address and nat_destination_id):
# No specific floating ip and no specific fixed ip have been
# requested but a private network (nat_destination) has been
# given where the floating ip should be attached to.
return self._find_ips_by_nat_destination(
server, nat_destination_name_or_id)
server, nat_destination_id)
else:
# not floating_ip_address
# and (fixed_address or not nat_destination_name_or_id)
# and (fixed_address or not nat_destination_id)
# An analysis of all floating ips of server is required
return self._find_ips_by_network_id_and_fixed_address(
server, fixed_address, network_id)
elif fixed_address or nat_destination_id:
ports = self._find_ports_by_fixed_address_or_nat_destination(fixed_address, nat_destination_id)
floating_ips = []
for port in ports:
ips = list(self.conn.network.ips(port_id=port.id))
floating_ips.extend(ips)
return floating_ips
elif network_id:
return list(self.conn.network.ips(floating_network_id=network_id))
else:
return []
def _find_ips_by_nat_destination(self,
server,
nat_destination_name_or_id):
nat_destination_id):
if not server['addresses']:
return None
@@ -437,7 +494,7 @@ class NetworkingFloatingIPModule(OpenStackModule):
# Check if we have any floating ip on
# the given nat_destination network
nat_destination = self.conn.network.find_network(
nat_destination_name_or_id, ignore_missing=False)
nat_destination_id, ignore_missing=False)
fips_with_nat_destination = [
addr for addr
@@ -467,7 +524,7 @@ class NetworkingFloatingIPModule(OpenStackModule):
# match network of floating ip
continue
if not fixed_address: # and not nat_destination_name_or_id
if not fixed_address: # and not nat_destination_id
# Any floating ip will fulfil these requirements
matching_ips.append(ip)
@@ -478,20 +535,84 @@ class NetworkingFloatingIPModule(OpenStackModule):
return matching_ips
def _params_to_kwargs(self,
floating_ip_address,
network_id,
fixed_address,
nat_destination):
kwargs = {}
kwargs['floating_network_id'] = network_id
if fixed_address:
# must indicate internal port identifier
ports = self._find_ports_by_fixed_address_or_nat_destination(
fixed_address, nat_destination
)
if len(ports) > 1:
self.fail_json(
msg='There are multiple subnets with the fixed ip '
'address {0}'.format(fixed_address)
)
elif len(ports) == 0:
self.fail_json(
msg='No port found with fixed ip address {0}'.format(
fixed_address)
)
else:
kwargs['fixed_ip_address'] = fixed_address
kwargs['port_id'] = ports[0].id
if floating_ip_address:
kwargs['floating_ip_address'] = floating_ip_address
return kwargs
def _find_ports_by_fixed_address_or_nat_destination(self,
fixed_address,
nat_destination):
port_kwargs = {}
if fixed_address:
port_kwargs['fixed_ips'] = f'ip_address={fixed_address}'
if nat_destination:
port_kwargs['network_id'] = nat_destination.id
ports = self.conn.network.ports(**port_kwargs)
return list(ports)
def _init(self):
server_name_or_id = self.params['server']
server = self.conn.compute.find_server(server_name_or_id,
ignore_missing=False)
# fetch server details such as addresses
self.server = self.conn.compute.get_server(server)
if server_name_or_id:
self.server = self.conn.compute.find_server(
name_or_id=server_name_or_id, ignore_missing=False
)
else:
self.server = None
if (self.server is None and self.params['fixed_address']
and self.params['nat_destination'] is None):
self.fail_json(
msg='fixed_address requires nat_destination to be set '
'when server is not given'
)
network_name_or_id = self.params['network']
if network_name_or_id:
self.network = self.conn.network.find_network(
name_or_id=network_name_or_id, ignore_missing=False)
name_or_id=network_name_or_id, ignore_missing=False
)
else:
self.network = None
nat_destination_name_or_id = self.params['nat_destination']
if nat_destination_name_or_id:
self.nat_destination = self.conn.network.find_network(
name_or_id=nat_destination_name_or_id, ignore_missing=False
)
else:
self.nat_destination = None
def main():
module = NetworkingFloatingIPModule()

View File

@@ -128,6 +128,20 @@ options:
- Should only be used when needed, such as when the user needs the cloud to
transform image format.
type: bool
import_method:
description:
- Method to use for importing the image. Not all deployments support all methods.
- Supported methods are web-download and glance-download.
- copy-image is not supported with create actions.
- glance-direct is deliberately not offered as a choice; set I(use_import) to trigger it instead.
type: str
choices: [web-download, glance-download]
uri:
description:
- Required only if using the web-download import method.
- This URL is where the data is made available to the Image service.
type: str
extends_documentation_fragment:
- openstack.cloud.openstack
'''
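The new options combine roughly as in this hedged example task (cloud name, image name and URL are placeholders; with web-download the Image service fetches the data server-side from I(uri)):

```yaml
- name: Create an image by importing it server-side via web-download
  openstack.cloud.image:
    cloud: mycloud
    name: example-image
    container_format: bare
    disk_format: qcow2
    import_method: web-download
    uri: https://example.com/image.qcow2
```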
@@ -399,11 +413,13 @@ class ImageModule(OpenStackModule):
visibility=dict(choices=['public', 'private', 'shared', 'community']),
volume=dict(),
use_import=dict(type='bool'),
import_method=dict(choices=['web-download', 'glance-download']),
uri=dict()
)
module_kwargs = dict(
mutually_exclusive=[
('filename', 'volume'),
('filename', 'volume', 'uri'),
('visibility', 'is_public'),
],
)
@@ -412,7 +428,7 @@ class ImageModule(OpenStackModule):
attr_params = ('id', 'name', 'filename', 'disk_format',
'container_format', 'wait', 'timeout', 'is_public',
'is_protected', 'min_disk', 'min_ram', 'volume', 'tags',
'use_import')
'use_import', 'import_method', 'uri')
def _resolve_visibility(self):
"""resolve a visibility value to be compatible with older versions"""
@@ -469,6 +485,28 @@ class ImageModule(OpenStackModule):
return update_payload
def _wait_for_image_active(self, image):
if not self.params['wait']:
return image
return self.sdk.resource.wait_for_status(
self.conn.image,
image,
status='active',
failures=['error', 'deleted', 'killed'],
wait=self.params['timeout'],
attribute='status')
def _import_uploaded_image(self, image):
if not hasattr(self.conn.image, 'import_image'):
self.fail_json(
msg="The installed openstacksdk library does not support "
"image import operations required for images in the "
"'uploading' state.")
self.conn.image.import_image(image, method='glance-direct')
return self._wait_for_image_active(self.conn.get_image(image.id))
def run(self):
changed = False
image_name_or_id = self.params['id'] or self.params['name']
@@ -513,6 +551,29 @@ class ImageModule(OpenStackModule):
if image['status'] == 'deactivated':
self.conn.image.reactivate_image(image)
changed = True
elif image['status'] == 'queued':
if (
self.params['filename']
and hasattr(self.conn.image, 'stage_image')):
self.conn.image.stage_image(
image, filename=self.params['filename'])
changed = True
elif self.params['filename']:
with open(self.params['filename'], 'rb') as image_data:
self.conn.image.upload_image(
container_format=self.params['container_format'],
disk_format=self.params['disk_format'],
data=image_data,
id=image.id,
name=image.name)
changed = True
image = self.conn.get_image(image.id)
if image['status'] == 'uploading' and self.params['use_import']:
image = self._import_uploaded_image(image)
changed = True
elif image['status'] == 'importing':
image = self._wait_for_image_active(image)
update_payload = self._build_update(image)

View File

@@ -0,0 +1,183 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2025 British Broadcasting Corporation
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
DOCUMENTATION = '''
---
module: network_segment
short_description: Creates/removes network segments from OpenStack
author: OpenStack Ansible SIG
description:
- Add, update or remove network segments from OpenStack.
options:
name:
description:
- Name to be assigned to the segment. Although Neutron allows for
non-unique segment names, this module enforces segment name
uniqueness.
required: true
type: str
description:
description:
- Description of the segment
type: str
network:
description:
- Name or ID of the network to which the segment should be attached.
type: str
network_type:
description:
- The type of physical network that maps to this segment resource.
type: str
physical_network:
description:
- The physical network where this segment object is implemented.
type: str
segmentation_id:
description:
- An isolated segment on the physical network. The I(network_type)
attribute defines the segmentation model. For example, if the
I(network_type) value is vlan, this ID is a vlan identifier. If
the I(network_type) value is gre, this ID is a gre key.
type: int
state:
description:
- Indicate desired state of the resource.
choices: ['present', 'absent']
default: present
type: str
extends_documentation_fragment:
- openstack.cloud.openstack
'''
EXAMPLES = '''
# Create a VLAN type network segment named 'segment1'.
- openstack.cloud.network_segment:
cloud: mycloud
name: segment1
network: my_network
network_type: vlan
segmentation_id: 2000
physical_network: my_physnet
state: present
'''
RETURN = '''
id:
description: Id of segment
returned: On success when segment exists.
type: str
network_segment:
description: Dictionary describing the network segment.
returned: On success when network segment exists.
type: dict
contains:
description:
description: Description
type: str
id:
description: Id
type: str
name:
description: Name
type: str
network_id:
description: Network Id
type: str
network_type:
description: Network type
type: str
physical_network:
description: Physical network
type: str
segmentation_id:
description: Segmentation Id
type: int
'''
from ansible_collections.openstack.cloud.plugins.module_utils.openstack import OpenStackModule
class NetworkSegmentModule(OpenStackModule):
argument_spec = dict(
name=dict(required=True),
description=dict(),
network=dict(),
network_type=dict(),
physical_network=dict(),
segmentation_id=dict(type='int'),
state=dict(default='present', choices=['absent', 'present'])
)
def run(self):
state = self.params['state']
name = self.params['name']
network_name_or_id = self.params['network']
kwargs = {}
filters = {}
for arg in ('description', 'network_type', 'physical_network', 'segmentation_id'):
if self.params[arg] is not None:
kwargs[arg] = self.params[arg]
for arg in ('network_type', 'physical_network'):
if self.params[arg] is not None:
filters[arg] = self.params[arg]
if network_name_or_id:
network = self.conn.network.find_network(network_name_or_id,
ignore_missing=False,
**filters)
kwargs['network_id'] = network.id
filters['network_id'] = network.id
segment = self.conn.network.find_segment(name, **filters)
if state == 'present':
if not segment:
segment = self.conn.network.create_segment(name=name, **kwargs)
changed = True
else:
changed = False
update_kwargs = {}
# As the name is required and all other attributes cannot be
# changed (and appear in filters above), we only need to handle
# updates to the description here.
for arg in ["description"]:
if (
arg in kwargs
# ensure user wants something specific
and kwargs[arg] is not None
# and this is not what we have right now
and kwargs[arg] != segment[arg]
):
update_kwargs[arg] = kwargs[arg]
if update_kwargs:
segment = self.conn.network.update_segment(
segment.id, **update_kwargs
)
changed = True
segment = segment.to_dict(computed=False)
self.exit(changed=changed, network_segment=segment, id=segment['id'])
elif state == 'absent':
if not segment:
self.exit(changed=False)
else:
self.conn.network.delete_segment(segment['id'])
self.exit(changed=True)
def main():
module = NetworkSegmentModule()
module()
if __name__ == '__main__':
main()

View File

@@ -0,0 +1,202 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2024 Catalyst Cloud Limited
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
DOCUMENTATION = r"""
---
module: object_containers_info
short_description: Fetch container info from the OpenStack Swift service.
author: OpenStack Ansible SIG
description:
- Fetch container info from the OpenStack Swift service.
options:
name:
description:
- Name of the container
type: str
aliases: ["container"]
prefix:
description:
- Filter containers by prefix
type: str
extends_documentation_fragment:
- openstack.cloud.openstack
"""
EXAMPLES = r"""
- name: List all containers existing on the project
openstack.cloud.object_containers_info:
- name: Retrieve a single container by name
openstack.cloud.object_containers_info:
name: test-container
- name: Retrieve and filter containers by prefix
openstack.cloud.object_containers_info:
prefix: test-
"""
RETURN = r"""
containers:
description: List of dictionaries describing matching containers.
returned: always
type: list
elements: dict
contains:
bytes:
description: The total number of bytes that are stored in Object Storage
for the container.
type: int
sample: 5449
bytes_used:
description: The count of bytes used in total.
type: int
sample: 5449
content_type:
description: The MIME type of the list of names.
Only fetched when searching for a container by name.
type: str
sample: null
count:
description: The number of objects in the container.
type: int
sample: 1
history_location:
description: Enables versioning on the container.
Only fetched when searching for a container by name.
type: str
sample: null
id:
description: The ID of the container. Equals I(name).
type: str
sample: "otc"
if_none_match:
description: "In combination with C(Expect: 100-Continue), specify an
C(If-None-Match: *) header to query whether the server
already has a copy of the object before any data is sent.
Only set when searching for a container by name."
type: str
sample: null
is_content_type_detected:
description: If set to C(true), Object Storage guesses the content type
based on the file extension and ignores the value sent in
the Content-Type header, if present.
Only fetched when searching for a container by name.
type: bool
sample: null
is_newest:
description: If set to True, Object Storage queries all replicas to
return the most recent one. If you omit this header, Object
Storage responds faster after it finds one valid replica.
Because setting this header to True is more expensive for
the back end, use it only when it is absolutely needed.
Only fetched when searching for a container by name.
type: bool
sample: null
meta_temp_url_key:
description: The secret key value for temporary URLs. If not set,
this header is not returned by this operation.
Only fetched when searching for a container by name.
type: str
sample: null
meta_temp_url_key_2:
description: A second secret key value for temporary URLs. If not set,
this header is not returned by this operation.
Only fetched when searching for a container by name.
type: str
sample: null
name:
description: The name of the container.
type: str
sample: "otc"
object_count:
description: The number of objects.
type: int
sample: 1
read_ACL:
description: The ACL that grants read access. If not set, this header is
not returned by this operation.
Only fetched when searching for a container by name.
type: str
sample: null
storage_policy:
description: Storage policy used by the container. It is not possible to
change policy of an existing container.
Only fetched when searching for a container by name.
type: str
sample: null
sync_key:
description: The secret key for container synchronization. If not set,
this header is not returned by this operation.
Only fetched when searching for a container by name.
type: str
sample: null
sync_to:
description: The destination for container synchronization. If not set,
this header is not returned by this operation.
Only fetched when searching for a container by name.
type: str
sample: null
timestamp:
description: The timestamp of the transaction.
Only fetched when searching for a container by name.
type: str
sample: null
versions_location:
description: Enables versioning on this container. The value is the name
of another container. You must UTF-8-encode and then
URL-encode the name before you include it in the header. To
disable versioning, set the header to an empty string.
Only fetched when searching for a container by name.
type: str
sample: null
write_ACL:
description: The ACL that grants write access. If not set, this header is
not returned by this operation.
Only fetched when searching for a container by name.
type: str
sample: null
"""
from ansible_collections.openstack.cloud.plugins.module_utils.openstack import OpenStackModule
class ObjectContainersInfoModule(OpenStackModule):
argument_spec = dict(
name=dict(aliases=["container"]),
prefix=dict(),
)
module_kwargs = dict(
supports_check_mode=True,
)
def run(self):
if self.params["name"]:
containers = [
(
self.conn.object_store.get_container_metadata(
self.params["name"],
).to_dict(computed=False)
),
]
else:
query = {}
if self.params["prefix"]:
query["prefix"] = self.params["prefix"]
containers = [
c.to_dict(computed=False)
for c in self.conn.object_store.containers(**query)
]
self.exit(changed=False, containers=containers)
def main():
module = ObjectContainersInfoModule()
module()
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,249 @@
# Copyright: (c) 2018, Terry Jones <terry.jones@example.org>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
DOCUMENTATION = r'''
---
module: port_forwarding
short_description: Create/Update/Delete port forwarding resources from OpenStack
description:
- Create, Update and Remove Neutron floating IP port forwarding resources from OpenStack
- Port forwarding allows external traffic to reach instances behind a floating IP
author: OpenStack Ansible SIG
options:
external_protocol_port:
description:
- The external port number on the floating IP that will be forwarded
- Must be between 1 and 65535
- Required if C(port_forwarding_id) is set
type: int
aliases: ['external_port']
floating_ip:
description:
- The floating IP address or ID to create port forwarding on
type: str
required: true
aliases: ['floating_ip_address']
internal_ip:
description:
- The internal IP address to forward traffic to
- Must be one of the fixed IPs on the specified port
- If not specified, uses the first fixed IP of the port
- Requires C(network_port)
type: str
aliases: ['internal_ip_address']
internal_protocol_port:
description:
- The internal port number to forward traffic to
- Must be between 1 and 65535
- Required if C(port_forwarding_id) is set
type: int
aliases: ['internal_port']
network_port:
description:
- The Neutron port name or ID that contains the internal IP
- Required if C(port_forwarding_id) is set
type: str
port_forwarding_id:
description:
- ID of an existing port forwarding resource
- Used for updates and deletions when ID is known
type: str
protocol:
description:
- The IP protocol for the port forwarding resource
- Supports tcp and udp protocols
- Required if C(port_forwarding_id) is set
type: str
state:
description:
- Whether the port forwarding resource should exist or not
type: str
choices: ['present', 'absent']
default: present
extends_documentation_fragment:
- openstack.cloud.openstack
'''
EXAMPLES = r'''
- name: Create new port forwarding
openstack.cloud.port_forwarding:
state: present
floating_ip: 192.168.150.67
external_protocol_port: 80
internal_protocol_port: 8080
network_port: example_http_port
protocol: tcp
- name: Update previously created port forwarding
openstack.cloud.port_forwarding:
state: present
port_forwarding_id: existing_port_forwarding
floating_ip: 192.168.150.67
internal_protocol_port: 9090
- name: Delete port forwarding
openstack.cloud.port_forwarding:
state: absent
port_forwarding_id: "resource-id"
floating_ip: "203.0.113.100"
'''
RETURN = r'''
port_forwarding:
description: Dictionary describing the port forwarding resource.
type: list
elements: dict
returned: success
contains:
description:
description: The description of the port forwarding.
type: str
external_port:
description: The external port number.
type: int
floatingip_id:
description: The floating IP id associated with the port forwarding.
type: str
id:
description: The id of the port forwarding.
type: str
internal_ip_address:
description: The internal IP address associated with the port forwarding.
type: str
internal_port:
description: The internal port number.
type: int
internal_port_id:
description: The ID of the network port associated with the port forwarding.
type: str
protocol:
description: The IP protocol used for port forwarding.
type: str
'''
from ansible_collections.openstack.cloud.plugins.module_utils.openstack import (
OpenStackModule
)
class PortForwardingModule(OpenStackModule):
argument_spec = dict(
external_protocol_port=dict(type='int', aliases=['external_port']),
floating_ip=dict(required=True, aliases=['floating_ip_address']),
internal_ip=dict(aliases=['internal_ip_address']),
internal_protocol_port=dict(type='int', aliases=['internal_port']),
network_port=dict(),
port_forwarding_id=dict(),
protocol=dict(),
state=dict(default='present', choices=['present', 'absent']),
)
module_kwargs = dict(
required_if=[
['port_forwarding_id', None, ['external_protocol_port',
'internal_protocol_port',
'network_port',
'protocol'], False],
],
required_by={
'internal_ip': ['network_port'],
},
)
def run(self):
port_forwarding_id = self.params['port_forwarding_id']
floating_ip = self.conn.network.find_ip(self.params['floating_ip'],
ignore_missing=False)
port = self.conn.network.find_port(self.params['network_port']) \
if self.params['network_port'] else None
internal_ip = self._find_internal_ip(port) if port else None
external_port = self.params['external_protocol_port']
internal_port = self.params['internal_protocol_port']
protocol = self.params['protocol']
state = self.params['state']
attrs = {}
if port is not None:
attrs['internal_port_id'] = port.id
if internal_ip is not None:
attrs['internal_ip_address'] = internal_ip
if external_port is not None:
attrs['external_port'] = external_port
if protocol is not None:
attrs['protocol'] = protocol
port_forwarding = self._find_port_forwarding(floating_ip.id,
port_forwarding_id,
attrs)
if internal_port is not None:
attrs['internal_port'] = internal_port
changed = False
if state == 'present':
if port_forwarding:
# found valid pfwd_id or pfwd with matching attributes
new_attrs = {k: v for k, v in attrs.items() if port_forwarding[k] != v}
if new_attrs:
port_forwarding = self.conn.network.update_port_forwarding(
port_forwarding.id, floating_ip.id, **new_attrs)
changed = True
elif not port_forwarding_id:
# pfwd_id not given, so create new pfwd
attrs['floatingip_id'] = floating_ip.id
port_forwarding = self.conn.network.create_port_forwarding(**attrs)
changed = True
self.exit_json(changed=changed, port_forwarding=port_forwarding)
else:
if port_forwarding:
self.conn.network.delete_port_forwarding(port_forwarding.id, floating_ip.id)
changed = True
self.exit_json(changed=changed)
def _find_internal_ip(self, port):
internal_ip = self.params['internal_ip']
if internal_ip:
for fixed_ip in port.fixed_ips:
if fixed_ip['ip_address'] == internal_ip:
return internal_ip
self.fail_json(
msg='Internal IP %s not found in port %s fixed IPs' % (internal_ip, port.id))
else:
if port.fixed_ips:
return port.fixed_ips[0]['ip_address']
else:
self.fail_json(msg='Port %s has no fixed IPs available' % port.id)
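`_find_internal_ip` validates a requested internal IP against the port's fixed IPs and otherwise defaults to the first one. A standalone sketch of that selection logic with plain lists (sample data, not SDK port objects):

```python
def pick_internal_ip(port_fixed_ips, requested=None):
    """Validate a requested internal IP against a port's fixed IPs,
    or default to the first fixed IP -- mirrors _find_internal_ip."""
    if requested is not None:
        if any(f['ip_address'] == requested for f in port_fixed_ips):
            return requested
        raise ValueError(
            'Internal IP %s not found in port fixed IPs' % requested)
    if port_fixed_ips:
        return port_fixed_ips[0]['ip_address']
    raise ValueError('Port has no fixed IPs available')

fixed_ips = [{'ip_address': '10.0.0.5'}, {'ip_address': '10.0.0.6'}]
assert pick_internal_ip(fixed_ips, '10.0.0.6') == '10.0.0.6'
assert pick_internal_ip(fixed_ips) == '10.0.0.5'
```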
def _find_port_forwarding(self, fip_id, pf_id, attrs):
try:
if pf_id:
return self.conn.network.find_port_forwarding(pf_id, fip_id, ignore_missing=False)
port_forwardings = list(self.conn.network.port_forwardings(fip_id, **attrs))
if len(port_forwardings) > 1:
self.fail_json(
msg='Found more than one port forwarding resource with matching attributes')
return port_forwardings[0] if len(port_forwardings) == 1 else None
except self.sdk.exceptions.NotFoundException:
return None
def main():
module = PortForwardingModule()
module()
if __name__ == '__main__':
main()

View File

@@ -0,0 +1,148 @@
# Copyright: (c) 2018, Terry Jones <terry.jones@example.org>
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
DOCUMENTATION = r'''
---
module: port_forwarding_info
short_description: Retrieve port forwarding resources from OpenStack.
description:
- Retrieve Neutron floating IP port forwarding resources from OpenStack.
author: OpenStack Ansible SIG
options:
external_port:
description:
- The external port number on the floating IP that will be forwarded.
type: int
floating_ip:
description:
- The address or ID of a floating IP that contains a port forwarding.
type: str
internal_port_id:
description:
- The Neutron port ID.
type: str
port_forwarding_id:
description:
- ID of an existing port forwarding resource.
type: str
protocol:
description:
- The IP protocol for the port forwarding resource.
type: str
extends_documentation_fragment:
- openstack.cloud.openstack
'''
EXAMPLES = r'''
# Getting all port forwardings
- openstack.cloud.port_forwarding_info:
register: pfwds
# Getting port forwardings by associated floating ip
- openstack.cloud.port_forwarding_info:
floating_ip: 192.168.42.67
register: pfwds
# Getting port forwarding by port forwarding id
- openstack.cloud.port_forwarding_info:
port_forwarding_id: d09f88d6-bb20-4268-9139-27c1b82c51d0
register: pfwd
'''
RETURN = r'''
port_forwardings:
description: The port forwarding objects list.
type: list
elements: dict
returned: success
contains:
description:
description: The description of the port forwarding.
type: str
external_port:
description: The external port number.
type: int
floatingip_id:
description: The floating IP id associated with the port forwarding.
type: str
id:
description: The id of the port forwarding.
type: str
internal_ip_address:
description: The internal IP address associated with the port forwarding.
type: str
internal_port:
description: The internal port number.
type: int
internal_port_id:
description: The ID of the network port associated with the port forwarding.
type: str
protocol:
description: The IP protocol used for port forwarding.
type: str
'''
from ansible_collections.openstack.cloud.plugins.module_utils.openstack import (
OpenStackModule
)
class PortForwardingInfoModule(OpenStackModule):
argument_spec = dict(
external_port=dict(type='int'),
floating_ip=dict(),
internal_port_id=dict(),
port_forwarding_id=dict(),
protocol=dict(),
)
module_kwargs = dict(
supports_check_mode=True
)
def _find_port_forwardings(self):
port_forwarding_id = self.params['port_forwarding_id']
floating_ip = self.params['floating_ip']
query_kwargs = {k: self.params[k]
for k in ['external_port',
'internal_port_id',
'protocol']
if self.params[k] is not None}
floating_ips = None
if floating_ip:
fip = self.conn.network.find_ip(floating_ip)
floating_ips = [fip] if fip else []
else:
floating_ips = self.conn.network.ips()
port_forwardings = []
if port_forwarding_id is None:
for fip in floating_ips:
pfwds = self.conn.network.port_forwardings(fip.id, **query_kwargs)
port_forwardings.extend(list(pfwds))
else:
for fip in floating_ips:
pfwd = self.conn.network.find_port_forwarding(
port_forwarding_id, fip.id, **query_kwargs)
if pfwd:
return [pfwd]
return port_forwardings
def run(self):
port_forwardings = [pfwd.to_dict(computed=False)
for pfwd in self._find_port_forwardings()]
self.exit(changed=False, port_forwardings=port_forwardings)
def main():
module = PortForwardingInfoModule()
module()
if __name__ == '__main__':
main()
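The module builds its server-side query from only the parameters the user actually set, dropping the `None` entries before passing them to the SDK. A minimal sketch of that filter construction (sample values are illustrative):

```python
# Hypothetical parameter values, mirroring self.params in the module above.
params = {"external_port": 8080, "internal_port_id": None, "protocol": "tcp",
          "floating_ip": None, "port_forwarding_id": None}

# Keep only the query-relevant keys that were explicitly set.
query_kwargs = {k: params[k]
                for k in ["external_port", "internal_port_id", "protocol"]
                if params[k] is not None}

assert query_kwargs == {"external_port": 8080, "protocol": "tcp"}
```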


@@ -181,7 +181,7 @@ class IdentityProjectModule(OpenStackModule):
raise ValueError('Duplicate key(s) in extra_specs: {0}'
.format(', '.join(list(duplicate_keys))))
for k, v in extra_specs.items():
if v != project[k]:
if k not in project or v != project[k]:
attributes[k] = v
if attributes:
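The one-line change matters when `extra_specs` introduces a key the project does not have yet: the old comparison `v != project[k]` would raise `KeyError`, while the membership check treats a missing key as a difference. A minimal sketch with hypothetical values:

```python
project = {"enabled": True}                      # existing project attributes
extra_specs = {"enabled": True, "tags": "dev"}   # requested specs; "tags" is new

attributes = {}
for k, v in extra_specs.items():
    # `k not in project` short-circuits before project[k] can raise KeyError
    if k not in project or v != project[k]:
        attributes[k] = v

assert attributes == {"tags": "dev"}
```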


@@ -239,7 +239,11 @@ class DnsRecordsetModule(OpenStackModule):
elif self._needs_update(kwargs, recordset):
recordset = self.conn.dns.update_recordset(recordset, **kwargs)
changed = True
self.exit_json(changed=changed, recordset=recordset)
# NOTE(gtema): this is a workaround to temporarily bring the
# zone_id param back which may not be populated by the SDK
rs = recordset.to_dict(computed=False)
rs["zone_id"] = zone.id
self.exit_json(changed=changed, recordset=rs)
elif state == 'absent' and recordset is not None:
self.conn.dns.delete_recordset(recordset)
changed = True


@@ -19,7 +19,9 @@ options:
- Valid only with keystone version 3.
- Required if I(project) is not specified.
- When I(project) is specified, then I(domain) will not be used for
scoping the role association, only for finding resources.
scoping the role association, only for finding resources. Deprecated
for finding resources, please use I(group_domain), I(project_domain),
I(role_domain), or I(user_domain).
- "When scoping the role association, I(project) has precedence over
I(domain) and I(domain) has precedence over I(system): When I(project)
is specified, then I(domain) and I(system) are not used for role
@@ -32,24 +34,45 @@ options:
- Valid only with keystone version 3.
- If I(group) is not specified, then I(user) is required. Both may not be
specified at the same time.
- You can supply I(group_domain) or the deprecated I(domain) option to
find group resources.
type: str
group_domain:
description:
- Name or ID for the domain.
- Valid only with keystone version 3.
- Only valid for finding group resources.
type: str
project:
description:
- Name or ID of the project to scope the role association to.
- If you are using keystone version 2, then this value is required.
- When I(project) is specified, then I(domain) will not be used for
scoping the role association, only for finding resources.
scoping the role association, only for finding resources. Prefer
I(project_domain) over I(domain).
- "When scoping the role association, I(project) has precedence over
I(domain) and I(domain) has precedence over I(system): When I(project)
is specified, then I(domain) and I(system) are not used for role
association. When I(domain) is specified, then I(system) will not be
used for role association."
type: str
project_domain:
description:
- Name or ID for the domain.
- Valid only with keystone version 3.
- Only valid for finding project resources.
type: str
role:
description:
- Name or ID for the role.
required: true
type: str
role_domain:
description:
- Name or ID for the domain.
- Valid only with keystone version 3.
- Only valid for finding role resources.
type: str
state:
description:
- Should the roles be present or absent on the user.
@@ -73,6 +96,12 @@ options:
- If I(user) is not specified, then I(group) is required. Both may not be
specified at the same time.
type: str
user_domain:
description:
- Name or ID for the domain.
- Valid only with keystone version 3.
- Only valid for finding user resources.
type: str
extends_documentation_fragment:
- openstack.cloud.openstack
'''
@@ -101,11 +130,15 @@ class IdentityRoleAssignmentModule(OpenStackModule):
argument_spec = dict(
domain=dict(),
group=dict(),
group_domain=dict(type='str'),
project=dict(),
project_domain=dict(type='str'),
role=dict(required=True),
role_domain=dict(type='str'),
state=dict(default='present', choices=['absent', 'present']),
system=dict(),
user=dict(),
user_domain=dict(type='str'),
)
module_kwargs = dict(
@@ -113,17 +146,33 @@ class IdentityRoleAssignmentModule(OpenStackModule):
('user', 'group'),
('domain', 'project', 'system'),
],
mutually_exclusive=[
('user', 'group'),
('project', 'system'), # domain should be part of this
],
supports_check_mode=True
)
def _find_domain_id(self, domain):
if domain is not None:
domain = self.conn.identity.find_domain(domain,
ignore_missing=False)
return dict(domain_id=domain['id'])
return dict()
def run(self):
filters = {}
find_filters = {}
kwargs = {}
group_find_filters = {}
project_find_filters = {}
role_find_filters = {}
user_find_filters = {}
role_find_filters.update(self._find_domain_id(
self.params['role_domain']))
role_name_or_id = self.params['role']
role = self.conn.identity.find_role(role_name_or_id,
ignore_missing=False)
ignore_missing=False,
**role_find_filters)
filters['role_id'] = role['id']
domain_name_or_id = self.params['domain']
@@ -131,22 +180,31 @@ class IdentityRoleAssignmentModule(OpenStackModule):
domain = self.conn.identity.find_domain(
domain_name_or_id, ignore_missing=False)
filters['scope_domain_id'] = domain['id']
find_filters['domain_id'] = domain['id']
kwargs['domain'] = domain['id']
group_find_filters['domain_id'] = domain['id']
project_find_filters['domain_id'] = domain['id']
user_find_filters['domain_id'] = domain['id']
user_name_or_id = self.params['user']
if user_name_or_id is not None:
user_find_filters.update(self._find_domain_id(
self.params['user_domain']))
user = self.conn.identity.find_user(
user_name_or_id, ignore_missing=False, **find_filters)
user_name_or_id, ignore_missing=False,
**user_find_filters)
filters['user_id'] = user['id']
kwargs['user'] = user['id']
else:
user = None
group_name_or_id = self.params['group']
if group_name_or_id is not None:
group_find_filters.update(self._find_domain_id(
self.params['group_domain']))
group = self.conn.identity.find_group(
group_name_or_id, ignore_missing=False, **find_filters)
group_name_or_id, ignore_missing=False,
**group_find_filters)
filters['group_id'] = group['id']
kwargs['group'] = group['id']
else:
group = None
system_name = self.params['system']
if system_name is not None:
@@ -154,14 +212,14 @@ class IdentityRoleAssignmentModule(OpenStackModule):
if 'scope_domain_id' not in filters:
filters['scope.system'] = system_name
kwargs['system'] = system_name
project_name_or_id = self.params['project']
if project_name_or_id is not None:
project_find_filters.update(self._find_domain_id(
self.params['project_domain']))
project = self.conn.identity.find_project(
project_name_or_id, ignore_missing=False, **find_filters)
project_name_or_id, ignore_missing=False,
**project_find_filters)
filters['scope_project_id'] = project['id']
kwargs['project'] = project['id']
# project has precedence over domain and system
filters.pop('scope_domain_id', None)
@@ -176,10 +234,50 @@ class IdentityRoleAssignmentModule(OpenStackModule):
or (state == 'absent' and role_assignments)))
if state == 'present' and not role_assignments:
self.conn.grant_role(role['id'], **kwargs)
if 'scope_domain_id' in filters:
if user is not None:
self.conn.identity.assign_domain_role_to_user(
filters['scope_domain_id'], user, role)
else:
self.conn.identity.assign_domain_role_to_group(
filters['scope_domain_id'], group, role)
elif 'scope_project_id' in filters:
if user is not None:
self.conn.identity.assign_project_role_to_user(
filters['scope_project_id'], user, role)
else:
self.conn.identity.assign_project_role_to_group(
filters['scope_project_id'], group, role)
elif 'scope.system' in filters:
if user is not None:
self.conn.identity.assign_system_role_to_user(
user, role, filters['scope.system'])
else:
self.conn.identity.assign_system_role_to_group(
group, role, filters['scope.system'])
self.exit_json(changed=True)
elif state == 'absent' and role_assignments:
self.conn.revoke_role(role['id'], **kwargs)
if 'scope_domain_id' in filters:
if user is not None:
self.conn.identity.unassign_domain_role_from_user(
filters['scope_domain_id'], user, role)
else:
self.conn.identity.unassign_domain_role_from_group(
filters['scope_domain_id'], group, role)
elif 'scope_project_id' in filters:
if user is not None:
self.conn.identity.unassign_project_role_from_user(
filters['scope_project_id'], user, role)
else:
self.conn.identity.unassign_project_role_from_group(
filters['scope_project_id'], group, role)
elif 'scope.system' in filters:
if user is not None:
self.conn.identity.unassign_system_role_from_user(
user, role, filters['scope.system'])
else:
self.conn.identity.unassign_system_role_from_group(
group, role, filters['scope.system'])
self.exit_json(changed=True)
else:
self.exit_json(changed=False)
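The `_find_domain_id` helper above returns a kwargs fragment that can be splatted into `find_user`, `find_group`, `find_project`, or `find_role`, and an empty dict when no `*_domain` parameter was given. A standalone sketch of that pattern (hypothetical helper name and IDs):

```python
def find_domain_filter(domain_id):
    # Mirrors _find_domain_id: a splat-ready filter, or nothing at all.
    return {"domain_id": domain_id} if domain_id is not None else {}

filters = {}
filters.update(find_domain_filter("d-123"))   # user_domain was set
assert filters == {"domain_id": "d-123"}

assert find_domain_filter(None) == {}          # user_domain omitted: no filter
```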


@@ -372,6 +372,10 @@ class RouterModule(OpenStackModule):
for p in external_fixed_ips:
if 'ip_address' in p:
req_fip_map[p['subnet_id']].add(p['ip_address'])
elif p['subnet_id'] in cur_fip_map:
# handle idempotence of updating with no explicit ip
req_fip_map[p['subnet_id']].update(
cur_fip_map[p['subnet_id']])
# Check if external ip addresses need to be added
for fip in external_fixed_ips:
@@ -464,7 +468,7 @@ class RouterModule(OpenStackModule):
subnet = self.conn.network.find_subnet(
iface['subnet_id'], ignore_missing=False, **filters)
fip = dict(subnet_id=subnet.id)
if 'ip_address' in iface:
if iface.get('ip_address', None) is not None:
fip['ip_address'] = iface['ip_address']
external_fixed_ips.append(fip)
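The added branch keeps updates idempotent when a requested entry names a subnet but no explicit address: the currently assigned address is carried over, so the requested map equals the current map and no update is triggered. A sketch with illustrative IDs:

```python
from collections import defaultdict

cur_fip_map = {"subnet-a": {"10.0.0.5"}}          # what the router has now
external_fixed_ips = [{"subnet_id": "subnet-a"}]  # no ip_address requested

req_fip_map = defaultdict(set)
for p in external_fixed_ips:
    if p.get("ip_address") is not None:
        req_fip_map[p["subnet_id"]].add(p["ip_address"])
    elif p["subnet_id"] in cur_fip_map:
        # no explicit ip: keep whatever is currently assigned on that subnet
        req_fip_map[p["subnet_id"]].update(cur_fip_map[p["subnet_id"]])

# requested state equals current state, so nothing needs updating
assert dict(req_fip_map) == cur_fip_map
```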


@@ -195,10 +195,12 @@ options:
added.
- On server creation, if I(security_groups) is omitted, the API creates
the server in the default security group.
- Requested security groups are not applied to pre-existing ports.
- On server creation, requested security groups are not applied to
pre-existing ports.
- On update, if I(security_groups) is set, the security groups are
applied to all attached ports.
type: list
elements: str
default: []
state:
description:
- Should the resource be C(present) or C(absent).
@@ -830,7 +832,7 @@ class ServerModule(OpenStackModule):
nics=dict(default=[], type='list', elements='raw'),
reuse_ips=dict(default=True, type='bool'),
scheduler_hints=dict(type='dict'),
security_groups=dict(default=[], type='list', elements='str'),
security_groups=dict(type='list', elements='str'),
state=dict(default='present', choices=['absent', 'present']),
tags=dict(type='list', default=[], elements='str'),
terminate_volume=dict(default=False, type='bool'),
@@ -952,6 +954,9 @@ class ServerModule(OpenStackModule):
def _build_update_security_groups(self, server):
update = {}
if self.params['security_groups'] is None:
return update
required_security_groups = dict(
(sg['id'], sg) for sg in [
self.conn.network.find_security_group(

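Dropping the `default=[]` makes `security_groups` tri-state: `None` (omitted) means leave existing groups alone, while an explicit empty list means clear them. A simplified sketch of that distinction (hypothetical helper, not the module's full update logic):

```python
def build_update_security_groups(security_groups):
    # None: the user omitted the parameter, so do not touch existing groups.
    if security_groups is None:
        return {}
    # An explicit list (including []) is the desired final set of groups.
    return {"security_groups": security_groups}

assert build_update_security_groups(None) == {}
assert build_update_security_groups([]) == {"security_groups": []}
assert build_update_security_groups(["web"]) == {"security_groups": ["web"]}
```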

@@ -136,6 +136,9 @@ class ServerActionModule(OpenStackModule):
# rebuild does not depend on state
will_change = (
(action == 'rebuild')
# `reboot_*` actions do not change state, servers remain `ACTIVE`
or (action == 'reboot_hard')
or (action == 'reboot_soft')
or (action == 'lock' and not server['is_locked'])
or (action == 'unlock' and server['is_locked'])
or server.status.lower() not in [a.lower()


@@ -0,0 +1,520 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2025 VEXXHOST, Inc.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
DOCUMENTATION = r"""
---
module: share_type
short_description: Manage OpenStack share type
author: OpenStack Ansible SIG
description:
- Add, remove or update share types in OpenStack Manila.
options:
name:
description:
- Share type name or id.
- For private share types, the UUID must be used instead of name.
required: true
type: str
description:
description:
- Description of the share type.
type: str
extra_specs:
description:
- Dictionary of share type extra specifications
type: dict
is_public:
description:
- Make share type accessible to the public.
- Can be updated after creation using Manila API direct updates.
type: bool
default: true
driver_handles_share_servers:
description:
- Boolean flag indicating whether share servers are managed by the driver.
- Required for share type creation.
- This is automatically added to extra_specs as 'driver_handles_share_servers'.
type: bool
default: true
state:
description:
- Indicate desired state of the resource.
choices: ['present', 'absent']
default: present
type: str
extends_documentation_fragment:
- openstack.cloud.openstack
"""
EXAMPLES = r"""
- name: Delete share type by name
openstack.cloud.share_type:
name: test_share_type
state: absent
- name: Delete share type by id
openstack.cloud.share_type:
name: fbadfa6b-5f17-4c26-948e-73b94de57b42
state: absent
- name: Create share type
openstack.cloud.share_type:
name: manila-generic-share
state: present
driver_handles_share_servers: true
extra_specs:
share_backend_name: GENERIC_BACKEND
snapshot_support: true
create_share_from_snapshot_support: true
description: Generic share type
is_public: true
"""
RETURN = """
share_type:
description: Dictionary describing share type
returned: On success when I(state) is 'present'
type: dict
contains:
name:
description: share type name
returned: success
type: str
sample: manila-generic-share
extra_specs:
description: share type extra specifications
returned: success
type: dict
sample: {"share_backend_name": "GENERIC_BACKEND", "snapshot_support": "true"}
is_public:
description: whether the share type is public
returned: success
type: bool
sample: True
description:
description: share type description
returned: success
type: str
sample: Generic share type
driver_handles_share_servers:
description: whether driver handles share servers
returned: success
type: bool
sample: true
id:
description: share type uuid
returned: success
type: str
sample: b75d8c5c-a6d8-4a5d-8c86-ef4f1298525d
"""
from ansible_collections.openstack.cloud.plugins.module_utils.openstack import (
OpenStackModule,
)
# Manila API microversion 2.50 provides complete share type information
# including is_default field and description
# Reference: https://docs.openstack.org/api-ref/shared-file-system/#show-share-type-detail
MANILA_MICROVERSION = "2.50"
class ShareTypeModule(OpenStackModule):
argument_spec = dict(
name=dict(type="str", required=True),
description=dict(type="str", required=False),
extra_specs=dict(type="dict", required=False),
is_public=dict(type="bool", default=True),
driver_handles_share_servers=dict(type="bool", default=True),
state=dict(type="str", default="present", choices=["absent", "present"]),
)
module_kwargs = dict(
required_if=[("state", "present", ["driver_handles_share_servers"])],
supports_check_mode=True,
)
@staticmethod
def _extract_result(details):
if details is not None:
if hasattr(details, "to_dict"):
result = details.to_dict(computed=False)
elif isinstance(details, dict):
result = details.copy()
else:
result = dict(details) if details else {}
# Normalize is_public field from API response
if result and "os-share-type-access:is_public" in result:
result["is_public"] = result["os-share-type-access:is_public"]
elif result and "share_type_access:is_public" in result:
result["is_public"] = result["share_type_access:is_public"]
return result
return {}
def _find_share_type(self, name_or_id):
"""
Find share type by name or ID with comprehensive information.
Uses direct Manila API calls since SDK methods are not available.
Handles both public and private share types.
"""
# Try direct access first for complete information
share_type = self._find_by_direct_access(name_or_id)
if share_type:
return share_type
# If direct access fails, try searching in public listing
# This handles cases where we have the name but need to find the ID
try:
response = self.conn.shared_file_system.get("/types")
share_types = response.json().get("share_types", [])
for share_type in share_types:
if share_type["name"] == name_or_id or share_type["id"] == name_or_id:
# Found by name, now get complete info using the ID
result = self._find_by_direct_access(share_type["id"])
if result:
return result
except Exception:
pass
return None
def _find_by_direct_access(self, name_or_id):
"""
Find share type by direct access using Manila API.
Uses microversion to get complete information including description and is_default.
Falls back to basic API if microversion is not supported.
"""
# Try with microversion first for complete information
try:
response = self.conn.shared_file_system.get(
f"/types/{name_or_id}", microversion=MANILA_MICROVERSION
)
share_type_data = response.json().get("share_type", {})
if share_type_data:
return share_type_data
except Exception:
pass
# Fallback: try without microversion for basic information
try:
response = self.conn.shared_file_system.get(f"/types/{name_or_id}")
share_type_data = response.json().get("share_type", {})
if share_type_data:
return share_type_data
except Exception:
pass
return None
def run(self):
state = self.params["state"]
name_or_id = self.params["name"]
# Find existing share type (similar to volume_type.py pattern)
share_type = self._find_share_type(name_or_id)
if self.ansible.check_mode:
self.exit_json(changed=self._will_change(state, share_type))
if state == "present" and not share_type:
# Create type
create_result = self._create()
share_type = self._extract_result(create_result)
self.exit_json(changed=True, share_type=share_type)
elif state == "present" and share_type:
# Update type
update = self._build_update(share_type)
update_result = self._update(share_type, update)
share_type = self._extract_result(update_result)
self.exit_json(changed=bool(update), share_type=share_type)
elif state == "absent" and share_type:
# Delete type
self._delete(share_type)
self.exit_json(changed=True)
else:
# state == 'absent' and not share_type
self.exit_json(changed=False)
def _build_update(self, share_type):
return {
**self._build_update_extra_specs(share_type),
**self._build_update_share_type(share_type),
}
def _build_update_extra_specs(self, share_type):
update = {}
old_extra_specs = share_type.get("extra_specs", {})
# Build the complete new extra specs including driver_handles_share_servers
new_extra_specs = {}
# Add driver_handles_share_servers (always required)
if self.params.get("driver_handles_share_servers") is not None:
new_extra_specs["driver_handles_share_servers"] = str(
self.params["driver_handles_share_servers"]
).title()
# Add user-defined extra specs
if self.params.get("extra_specs"):
new_extra_specs.update(
{k: str(v) for k, v in self.params["extra_specs"].items()}
)
delete_extra_specs_keys = set(old_extra_specs.keys()) - set(
new_extra_specs.keys()
)
if delete_extra_specs_keys:
update["delete_extra_specs_keys"] = delete_extra_specs_keys
if old_extra_specs != new_extra_specs:
update["create_extra_specs"] = new_extra_specs
return update
def _build_update_share_type(self, share_type):
update = {}
# Only allow description updates - name is used for identification
allowed_attributes = ["description"]
# Handle is_public updates - CLI supports this, so we should too
# Always check is_public since it has a default value of True
current_is_public = share_type.get(
"os-share-type-access:is_public",
share_type.get("share_type_access:is_public"),
)
requested_is_public = self.params["is_public"] # Will be True by default now
if current_is_public != requested_is_public:
# Mark this as needing a special access update
update["update_access"] = {
"is_public": requested_is_public,
"share_type_id": share_type.get("id"),
}
type_attributes = {
k: self.params[k]
for k in allowed_attributes
if k in self.params
and self.params.get(k) is not None
and self.params.get(k) != share_type.get(k)
}
if type_attributes:
update["type_attributes"] = type_attributes
return update
def _create(self):
share_type_attrs = {"name": self.params["name"]}
if self.params.get("description") is not None:
share_type_attrs["description"] = self.params["description"]
# Handle driver_handles_share_servers - this is the key required parameter
extra_specs = {}
if self.params.get("driver_handles_share_servers") is not None:
extra_specs["driver_handles_share_servers"] = str(
self.params["driver_handles_share_servers"]
).title()
# Add user-defined extra specs
if self.params.get("extra_specs"):
extra_specs.update(
{k: str(v) for k, v in self.params["extra_specs"].items()}
)
if extra_specs:
share_type_attrs["extra_specs"] = extra_specs
# Handle is_public parameter - field name depends on API version
if self.params.get("is_public") is not None:
# For microversion (API 2.7+), use share_type_access:is_public
# For older versions, use os-share-type-access:is_public
share_type_attrs["share_type_access:is_public"] = self.params["is_public"]
# Also include legacy field for compatibility
share_type_attrs["os-share-type-access:is_public"] = self.params[
"is_public"
]
try:
payload = {"share_type": share_type_attrs}
# Try with microversion first (supports share_type_access:is_public)
try:
response = self.conn.shared_file_system.post(
"/types", json=payload, microversion=MANILA_MICROVERSION
)
share_type_data = response.json().get("share_type", {})
except Exception:
# Fallback: try without microversion (uses os-share-type-access:is_public)
# Remove the newer field name for older API compatibility
if "share_type_access:is_public" in share_type_attrs:
del share_type_attrs["share_type_access:is_public"]
payload = {"share_type": share_type_attrs}
response = self.conn.shared_file_system.post("/types", json=payload)
share_type_data = response.json().get("share_type", {})
return share_type_data
except Exception as e:
self.fail_json(msg=f"Failed to create share type: {str(e)}")
def _delete(self, share_type):
# Use direct API call since SDK method may not exist
try:
share_type_id = (
share_type.get("id") if isinstance(share_type, dict) else share_type.id
)
# Try with microversion first, fallback if not supported
try:
self.conn.shared_file_system.delete(
f"/types/{share_type_id}", microversion=MANILA_MICROVERSION
)
except Exception:
self.conn.shared_file_system.delete(f"/types/{share_type_id}")
except Exception as e:
self.fail_json(msg=f"Failed to delete share type: {str(e)}")
def _update(self, share_type, update):
if not update:
return share_type
share_type = self._update_share_type(share_type, update)
share_type = self._update_extra_specs(share_type, update)
share_type = self._update_access(share_type, update)
return share_type
def _update_extra_specs(self, share_type, update):
share_type_id = (
share_type.get("id") if isinstance(share_type, dict) else share_type.id
)
delete_extra_specs_keys = update.get("delete_extra_specs_keys")
if delete_extra_specs_keys:
for key in delete_extra_specs_keys:
try:
# Try with microversion first, fallback if not supported
try:
self.conn.shared_file_system.delete(
f"/types/{share_type_id}/extra_specs/{key}",
microversion=MANILA_MICROVERSION,
)
except Exception:
self.conn.shared_file_system.delete(
f"/types/{share_type_id}/extra_specs/{key}"
)
except Exception as e:
self.fail_json(msg=f"Failed to delete extra spec '{key}': {str(e)}")
# refresh share_type information
share_type = self._find_share_type(share_type_id)
create_extra_specs = update.get("create_extra_specs")
if create_extra_specs:
# Convert values to strings as Manila API expects string values
string_specs = {k: str(v) for k, v in create_extra_specs.items()}
try:
# Try with microversion first, fallback if not supported
try:
self.conn.shared_file_system.post(
f"/types/{share_type_id}/extra_specs",
json={"extra_specs": string_specs},
microversion=MANILA_MICROVERSION,
)
except Exception:
self.conn.shared_file_system.post(
f"/types/{share_type_id}/extra_specs",
json={"extra_specs": string_specs},
)
except Exception as e:
self.fail_json(msg=f"Failed to update extra specs: {str(e)}")
# refresh share_type information
share_type = self._find_share_type(share_type_id)
return share_type
def _update_access(self, share_type, update):
"""Update share type access (public/private) using direct API update"""
access_update = update.get("update_access")
if not access_update:
return share_type
share_type_id = access_update["share_type_id"]
is_public = access_update["is_public"]
try:
# Use direct update with share_type_access:is_public (works for both public and private)
update_payload = {"share_type": {"share_type_access:is_public": is_public}}
try:
self.conn.shared_file_system.put(
f"/types/{share_type_id}",
json=update_payload,
microversion=MANILA_MICROVERSION,
)
except Exception:
# Fallback: try with legacy field name for older API versions
update_payload = {
"share_type": {"os-share-type-access:is_public": is_public}
}
self.conn.shared_file_system.put(
f"/types/{share_type_id}", json=update_payload
)
# Refresh share type information after access change
share_type = self._find_share_type(share_type_id)
except Exception as e:
self.fail_json(msg=f"Failed to update share type access: {str(e)}")
return share_type
def _update_share_type(self, share_type, update):
type_attributes = update.get("type_attributes")
if type_attributes:
share_type_id = (
share_type.get("id") if isinstance(share_type, dict) else share_type.id
)
try:
# Try with microversion first, fallback if not supported
try:
response = self.conn.shared_file_system.put(
f"/types/{share_type_id}",
json={"share_type": type_attributes},
microversion=MANILA_MICROVERSION,
)
except Exception:
response = self.conn.shared_file_system.put(
f"/types/{share_type_id}", json={"share_type": type_attributes}
)
updated_type = response.json().get("share_type", {})
return updated_type
except Exception as e:
self.fail_json(msg=f"Failed to update share type: {str(e)}")
return share_type
def _will_change(self, state, share_type):
if state == "present" and not share_type:
return True
if state == "present" and share_type:
return bool(self._build_update(share_type))
if state == "absent" and share_type:
return True
return False
def main():
module = ShareTypeModule()
module()
if __name__ == "__main__":
main()
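The extra-specs reconciliation above hinges on two details: Manila stores spec values as strings (`"True"`/`"False"`), and keys absent from the requested set must be deleted. A standalone sketch of that diff logic (hypothetical function, simplified from `_build_update_extra_specs`):

```python
def diff_extra_specs(old, requested_specs, dhss=True):
    # Manila expects string values; str(True).title() yields "True",
    # matching what the API returns for existing specs.
    new = {"driver_handles_share_servers": str(dhss).title()}
    new.update({k: str(v) for k, v in requested_specs.items()})
    update = {}
    stale = set(old) - set(new)          # specs to remove from the type
    if stale:
        update["delete_extra_specs_keys"] = stale
    if old != new:                       # specs to (re)create
        update["create_extra_specs"] = new
    return update

old = {"driver_handles_share_servers": "True", "snapshot_support": "True"}
# Re-requesting the same state produces no update: the module is idempotent.
assert diff_extra_specs(old, {"snapshot_support": True}) == {}
```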


@@ -0,0 +1,239 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2025 VEXXHOST, Inc.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
DOCUMENTATION = r"""
---
module: share_type_info
short_description: Get OpenStack share type details
author: OpenStack Ansible SIG
description:
- Get share type details in OpenStack Manila.
- Get share type access details for private share types.
- Uses Manila API microversion 2.50 to retrieve complete share type information including is_default field.
- Safely falls back to basic information if microversion 2.50 is not supported by the backend.
- Private share types can only be accessed by UUID.
options:
name:
description:
- Share type name or id.
- For private share types, the UUID must be used instead of name.
required: true
type: str
extends_documentation_fragment:
- openstack.cloud.openstack
"""
EXAMPLES = r"""
- name: Get share type details
openstack.cloud.share_type_info:
name: manila-generic-share
- name: Get share type details by id
openstack.cloud.share_type_info:
name: fbadfa6b-5f17-4c26-948e-73b94de57b42
"""
RETURN = """
share_type:
description: Dictionary describing share type
returned: On success
type: dict
contains:
id:
description: share type uuid
returned: success
type: str
sample: 59575cfc-3582-4efc-8eee-f47fcb25ea6b
name:
description: share type name
returned: success
type: str
sample: default
description:
description:
- share type description
- Available when Manila API microversion 2.50 is supported
- Falls back to empty string if microversion is not available
returned: success
type: str
sample: "Default Manila share type"
is_default:
description:
- whether this is the default share type
- Retrieved from the API response when microversion 2.50 is supported
- Falls back to null if microversion is not available or field is not present
returned: success
type: bool
sample: true
is_public:
description: whether the share type is public (true) or private (false)
returned: success
type: bool
sample: true
required_extra_specs:
description: Required extra specifications for the share type
returned: success
type: dict
sample: {"driver_handles_share_servers": "True"}
optional_extra_specs:
description: Optional extra specifications for the share type
returned: success
type: dict
sample: {"snapshot_support": "True", "create_share_from_snapshot_support": "True"}
"""
from ansible_collections.openstack.cloud.plugins.module_utils.openstack import (
OpenStackModule,
)
# Manila API microversion 2.50 provides complete share type information
# including is_default field and description
# Reference: https://docs.openstack.org/api-ref/shared-file-system/#show-share-type-detail
MANILA_MICROVERSION = "2.50"
class ShareTypeInfoModule(OpenStackModule):
argument_spec = dict(name=dict(type="str", required=True))
module_kwargs = dict(
supports_check_mode=True,
)
def __init__(self, **kwargs):
super(ShareTypeInfoModule, self).__init__(**kwargs)
def _find_share_type(self, name_or_id):
"""
Find share type by name or ID with comprehensive information.
"""
share_type = self._find_by_direct_access(name_or_id)
if share_type:
return share_type
# If direct access fails, try searching in public listing
# This handles cases where we have the name but need to find the ID
try:
response = self.conn.shared_file_system.get("/types")
share_types = response.json().get("share_types", [])
for share_type in share_types:
if share_type["name"] == name_or_id or share_type["id"] == name_or_id:
# Found by name, now get complete info using the ID
result = self._find_by_direct_access(share_type["id"])
if result:
return result
except Exception:
pass
return None
def _find_by_direct_access(self, name_or_id):
"""
Find share type by direct access (for private share types).
"""
try:
response = self.conn.shared_file_system.get(
f"/types/{name_or_id}", microversion=MANILA_MICROVERSION
)
share_type_data = response.json().get("share_type", {})
if share_type_data:
return share_type_data
except Exception:
pass
# Fallback: try without microversion for basic information
try:
response = self.conn.shared_file_system.get(f"/types/{name_or_id}")
share_type_data = response.json().get("share_type", {})
if share_type_data:
return share_type_data
except Exception:
pass
return None
def _normalize_share_type_dict(self, share_type_dict):
"""
Normalize share type dictionary to match CLI output format.
"""
# Extract extra specs information
extra_specs = share_type_dict.get("extra_specs", {})
required_extra_specs = share_type_dict.get("required_extra_specs", {})
# Optional extra specs are those in extra_specs but not in required_extra_specs
optional_extra_specs = {
key: value
for key, value in extra_specs.items()
if key not in required_extra_specs
}
# Determine if this is the default share type
# Use the is_default field from API response (available with microversion 2.50)
# If not available (older API versions), default to None
is_default = share_type_dict.get("is_default", None)
# Handle the description field - available through microversion 2.50
# Convert None to empty string if API returns null
description = share_type_dict.get("description") or ""
# Determine visibility - check both new and legacy field names
# Use the same logic as share_type.py for consistency
is_public = share_type_dict.get(
"os-share-type-access:is_public",
share_type_dict.get("share_type_access:is_public"),
)
# Build the normalized dictionary matching CLI output
normalized = {
"id": share_type_dict.get("id"),
"name": share_type_dict.get("name"),
"is_public": is_public,
"is_default": is_default,
"required_extra_specs": required_extra_specs,
"optional_extra_specs": optional_extra_specs,
"description": description,
}
return normalized
def run(self):
"""
Main execution method following OpenStackModule pattern.
Retrieves share type information using Manila API microversion for complete
details including description and is_default fields. Falls back gracefully to
basic API calls if microversion is not supported by the backend.
"""
name_or_id = self.params["name"]
share_type = self._find_share_type(name_or_id)
if not share_type:
self.fail_json(
msg=f"Share type '{name_or_id}' not found. "
"If this is a private share type, use its UUID instead of its name."
)
if hasattr(share_type, "to_dict"):
share_type_dict = share_type.to_dict()
elif isinstance(share_type, dict):
share_type_dict = share_type
else:
share_type_dict = dict(share_type) if share_type else {}
# Normalize the output to match CLI format
normalized_share_type = self._normalize_share_type_dict(share_type_dict)
# Return results in the standard format
result = dict(changed=False, share_type=normalized_share_type)
return result
def main():
module = ShareTypeInfoModule()
module()
if __name__ == "__main__":
main()
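The required/optional split performed by `_normalize_share_type_dict` can be reduced to a few lines. A minimal standalone sketch of that normalization step (the function name here is illustrative, not part of the module):

```python
def split_extra_specs(extra_specs, required_extra_specs):
    # Optional specs are those present in extra_specs but absent from
    # required_extra_specs, mirroring the module's normalization step.
    optional = {k: v for k, v in extra_specs.items()
                if k not in required_extra_specs}
    return required_extra_specs, optional


if __name__ == "__main__":
    required, optional = split_extra_specs(
        {"driver_handles_share_servers": "true", "snapshot_support": "true"},
        {"driver_handles_share_servers": "true"},
    )
    print(optional)
```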

View File

@@ -229,8 +229,10 @@ class StackInfoModule(OpenStackModule):
if self.params[k] is not None:
kwargs[k] = self.params[k]
stacks = [stack.to_dict(computed=False)
for stack in self.conn.orchestration.stacks(**kwargs)]
stacks = []
for stack in self.conn.orchestration.stacks(**kwargs):
stack_obj = self.conn.orchestration.get_stack(stack.id)
stacks.append(stack_obj.to_dict(computed=False))
self.exit_json(changed=False, stacks=stacks)

View File

@@ -115,6 +115,10 @@ options:
- Required when I(state) is 'present'
aliases: ['network_name']
type: str
network_segment:
description:
- Name or ID of the network segment with which the subnet should be associated
type: str
project:
description:
- Project name or ID containing the subnet (name admin-only)
@@ -294,6 +298,7 @@ class SubnetModule(OpenStackModule):
argument_spec = dict(
name=dict(required=True),
network=dict(aliases=['network_name']),
network_segment=dict(),
cidr=dict(),
description=dict(),
ip_version=dict(type='int', default=4, choices=[4, 6]),
@@ -369,9 +374,11 @@ class SubnetModule(OpenStackModule):
return [dict(start=pool_start, end=pool_end)]
return None
def _build_params(self, network, project, subnet_pool):
def _build_params(self, network, segment, project, subnet_pool):
params = {attr: self.params[attr] for attr in self.attr_params}
params['network_id'] = network.id
if segment:
params['segment_id'] = segment.id
if project:
params['project_id'] = project.id
if subnet_pool:
@@ -382,6 +389,8 @@ class SubnetModule(OpenStackModule):
params['allocation_pools'] = self.params['allocation_pools']
params = self._add_extra_attrs(params)
params = {k: v for k, v in params.items() if v is not None}
if self.params['disable_gateway_ip']:
params['gateway_ip'] = None
return params
def _build_updates(self, subnet, params):
@@ -414,6 +423,7 @@ class SubnetModule(OpenStackModule):
def run(self):
state = self.params['state']
network_name_or_id = self.params['network']
network_segment_name_or_id = self.params['network_segment']
project_name_or_id = self.params['project']
subnet_pool_name_or_id = self.params['subnet_pool']
subnet_name = self.params['name']
@@ -442,6 +452,13 @@ class SubnetModule(OpenStackModule):
**filters)
filters['network_id'] = network.id
segment = None
if network_segment_name_or_id:
segment = self.conn.network.find_segment(network_segment_name_or_id,
ignore_missing=False,
**filters)
filters['segment_id'] = segment.id
subnet_pool = None
if subnet_pool_name_or_id:
subnet_pool = self.conn.network.find_subnet_pool(
@@ -458,7 +475,7 @@ class SubnetModule(OpenStackModule):
changed = False
if state == 'present':
params = self._build_params(network, project, subnet_pool)
params = self._build_params(network, segment, project, subnet_pool)
if subnet is None:
subnet = self.conn.network.create_subnet(**params)
changed = True
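The `_build_params` change above follows a common pattern: attach optional resolved resources conditionally, then strip unset values before the create call. A hedged sketch of that pattern (names are illustrative, not the module's API):

```python
def build_subnet_params(base, network_id, segment_id=None, project_id=None):
    # Start from the user-supplied attributes, attach the resolved network,
    # and add segment/project ids only when they were actually looked up.
    params = dict(base, network_id=network_id)
    if segment_id:
        params['segment_id'] = segment_id
    if project_id:
        params['project_id'] = project_id
    # Drop unset attributes so the create call only sends real values.
    return {k: v for k, v in params.items() if v is not None}
```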

View File

@@ -201,6 +201,11 @@ class TrunkModule(OpenStackModule):
if state == 'present' and not trunk:
# create trunk
trunk = self._create(name_or_id, port)
# add sub ports
update = self._build_update(trunk, sub_ports)
trunk = self._update(trunk, update)
self.exit_json(changed=True,
trunk=trunk.to_dict(computed=False))
elif state == 'present' and trunk:
@@ -232,7 +237,7 @@ class TrunkModule(OpenStackModule):
if found is False:
psp = self.params['sub_ports'] or []
for k in psp:
if sp['name'] == k['port']:
if sp['name'] == k['port'] or sp['id'] == k['port']:
spobj = {
'port_id': sp['id'],
'segmentation_type': k['segmentation_type'],
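The trunk fix above widens the sub-port match from name-only to name *or* ID. A standalone sketch of that matching logic, assuming the usual `segmentation_id` key completes the truncated dict (an assumption, since the diff cuts off):

```python
def resolve_sub_ports(ports, requested):
    # Match each requested sub-port against known ports by name *or* id,
    # which is the behaviour the trunk fix introduces.
    resolved = []
    for sp in ports:
        for req in requested:
            if req['port'] in (sp['name'], sp['id']):
                resolved.append({
                    'port_id': sp['id'],
                    'segmentation_type': req['segmentation_type'],
                    # assumed key; the original snippet is truncated here
                    'segmentation_id': req['segmentation_id'],
                })
    return resolved
```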

View File

@@ -15,16 +15,19 @@ options:
availability_zone:
description:
- The availability zone.
- This attribute cannot be updated.
type: str
description:
description:
- String describing the volume
- This attribute cannot be updated.
type: str
aliases: [display_description]
image:
description:
- Image name or id for boot from volume
- Mutually exclusive with I(snapshot) and I(volume)
- This attribute cannot be updated.
type: str
is_bootable:
description:
@@ -40,30 +43,36 @@ options:
- Note that support for multiattach volumes depends on the volume
type being used.
- "Cinder's default for I(is_multiattach) is C(false)."
- This attribute cannot be updated.
type: bool
metadata:
description:
- Metadata for the volume
- This attribute cannot be updated.
type: dict
name:
description:
- Name of volume
- This attribute cannot be updated.
required: true
type: str
aliases: [display_name]
scheduler_hints:
description:
- Scheduler hints passed to volume API in form of dict
- This attribute cannot be updated.
type: dict
size:
description:
- Size of volume in GB. This parameter is required when the
I(state) parameter is 'present'.
- This attribute can only be updated to a larger size.
type: int
snapshot:
description:
- Volume snapshot name or id to create from
- Mutually exclusive with I(image) and I(volume)
- This attribute cannot be updated.
type: str
aliases: [snapshot_id]
state:
@@ -76,10 +85,12 @@ options:
description:
- Volume name or id to create from
- Mutually exclusive with I(image) and I(snapshot)
- This attribute cannot be updated.
type: str
volume_type:
description:
- Volume type for volume
- This attribute cannot be updated.
type: str
extends_documentation_fragment:
- openstack.cloud.openstack
@@ -238,16 +249,13 @@ class VolumeModule(OpenStackModule):
)
def _build_update(self, volume):
keys = ('size',)
keys = ('size', 'is_bootable')
return {k: self.params[k] for k in keys if self.params[k] is not None
and self.params[k] != volume[k]}
def _update(self, volume):
'''
modify volume, the only modification to an existing volume
available at the moment is extending the size, this is
limited by the openstacksdk and may change whenever the
functionality is extended.
modify volume. If the size has changed, it can only be extended.
'''
diff = {'before': volume.to_dict(computed=False), 'after': ''}
diff['after'] = diff['before']
@@ -259,15 +267,19 @@ class VolumeModule(OpenStackModule):
volume=volume.to_dict(computed=False), diff=diff)
if self.ansible.check_mode:
volume.size = update['size']
for k, v in update.items():
volume[k] = v
self.exit_json(changed=False,
volume=volume.to_dict(computed=False), diff=diff)
if 'size' in update and update['size'] != volume.size:
size = update['size']
self.conn.volume.extend_volume(volume.id, size)
volume = self.conn.block_storage.get_volume(volume)
if 'is_bootable' in update and update['is_bootable'] != volume.is_bootable:
self.conn.volume.set_volume_bootable_status(volume, update['is_bootable'])
volume = self.conn.block_storage.get_volume(volume)
volume = volume.to_dict(computed=False)
diff['after'] = volume
self.exit_json(changed=True, volume=volume, diff=diff)
@@ -310,6 +322,10 @@ class VolumeModule(OpenStackModule):
self.conn.block_storage.wait_for_status(
volume, wait=self.params['timeout'])
if self.params['is_bootable']:
self.conn.volume.set_volume_bootable_status(volume, True)
volume.is_bootable = True
volume = volume.to_dict(computed=False)
diff['after'] = volume
self.exit_json(changed=True, volume=volume, diff=diff)
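The `_build_update` change above computes a minimal update set: only attributes the caller explicitly set that differ from the volume's current state. A minimal sketch of that diff pattern (illustrative names, plain dicts standing in for SDK objects):

```python
def build_update(params, volume, keys=('size', 'is_bootable')):
    # Keep only attributes the caller actually set (not None) that differ
    # from the volume's current value; an empty dict means "no change".
    return {k: params[k] for k in keys
            if params.get(k) is not None and params[k] != volume[k]}
```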

View File

@@ -0,0 +1,97 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2025 by Pure Storage, Inc.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
DOCUMENTATION = r"""
---
module: volume_image_metadata
short_description: Manage OpenStack Cinder volume image metadata
extends_documentation_fragment:
- openstack.cloud.openstack
description:
- Set image metadata on a Cinder volume.
- This maps to the Cinder C(os-set_image_metadata) API action.
- This is distinct from regular volume metadata.
options:
volume:
description:
- Volume ID or name.
required: true
type: str
image_metadata:
description:
- Image metadata to apply to the volume.
required: true
type: dict
author:
- Simon Dodsley (@simondodsley)
"""
EXAMPLES = r"""
- name: Apply volume image metadata
openstack.cloud.volume_image_metadata:
cloud: mycloud
volume: 9c6b7c8d-1234
image_metadata:
image_id: 2e1a...
disk_format: qcow2
container_format: bare
"""
RETURN = r"""
changed:
description: Whether the volume image metadata was changed.
returned: always
type: bool
volume:
description: Volume information.
returned: always
type: dict
"""
from ansible_collections.openstack.cloud.plugins.module_utils.openstack import (
OpenStackModule,
)
class VolumeImageMetadataModule(OpenStackModule):
argument_spec = dict(
volume=dict(required=True),
image_metadata=dict(type="dict", required=True),
)
module_kwargs = dict(
supports_check_mode=True,
)
def run(self):
volume_ref = self.params["volume"]
desired_meta = self.params["image_metadata"]
# Resolve volume
volume = self.conn.block_storage.find_volume(volume_ref, ignore_missing=False)
current_meta = volume.volume_image_metadata or {}
# Idempotency check
if desired_meta.items() <= current_meta.items():
self.exit_json(changed=False, volume=volume.to_dict())
if not self.ansible.check_mode:
self.conn.block_storage.set_volume_image_metadata(volume.id, **desired_meta)
volume = self.conn.block_storage.get_volume(volume.id)
self.exit_json(changed=True, volume=volume.to_dict())
def main():
module = VolumeImageMetadataModule()
module()
if __name__ == "__main__":
main()
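The module's idempotency check relies on `dict.items()` views supporting subset comparison, so no API call is made when every desired pair is already present. A minimal standalone sketch:

```python
def metadata_up_to_date(desired, current):
    # dict.items() views support subset comparison: this is True only when
    # every desired key/value pair already exists in current.
    return desired.items() <= current.items()
```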

View File

@@ -0,0 +1,309 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2025 by Pure Storage, Inc.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
DOCUMENTATION = r"""
---
module: volume_manage
short_description: Manage/Unmanage Volumes
author: OpenStack Ansible SIG
description:
- Manage or Unmanage Volume in OpenStack.
options:
description:
description:
- String describing the volume
type: str
metadata:
description: Metadata for the volume
type: dict
name:
description:
- Name of the volume to be unmanaged, or
the new name to assign to a newly managed volume
- When I(state) is C(absent) this must be
the Cinder volume ID
required: true
type: str
state:
description:
- Should the resource be present or absent.
choices: [present, absent]
default: present
type: str
bootable:
description:
- Bootable flag for volume.
type: bool
default: False
volume_type:
description:
- Volume type for volume
type: str
availability_zone:
description:
- The availability zone.
type: str
host:
description:
- Cinder host on which the existing volume resides
- Takes the form "host@backend-name#pool"
- Required when I(state) is C(present).
type: str
source_name:
description:
- Name of existing volume
type: str
source_id:
description:
- Identifier of existing volume
type: str
extends_documentation_fragment:
- openstack.cloud.openstack
"""
RETURN = r"""
volume:
description: Cinder's representation of the volume object
returned: always
type: dict
contains:
attachments:
description: Instance attachment information. For a managed volume, this
will always be empty.
type: list
availability_zone:
description: The name of the availability zone.
type: str
consistency_group_id:
description: The UUID of the consistency group.
type: str
created_at:
description: The date and time when the resource was created.
type: str
description:
description: The volume description.
type: str
extended_replication_status:
description: Extended replication status on this volume.
type: str
group_id:
description: The ID of the group.
type: str
host:
description: The volume's current back-end.
type: str
id:
description: The UUID of the volume.
type: str
image_id:
description: Image on which the volume was based
type: str
is_bootable:
description: Enables or disables the bootable attribute. You can boot an
instance from a bootable volume.
type: str
is_encrypted:
description: If true, this volume is encrypted.
type: bool
is_multiattach:
description: Whether this volume can be attached to more than one
server.
type: bool
metadata:
description: A metadata object. Contains one or more metadata key and
value pairs that are associated with the volume.
type: dict
migration_id:
description: The volume ID that this volume name on the backend is
based on.
type: str
migration_status:
description: The status of this volume migration (None means that a
migration is not currently in progress).
type: str
name:
description: The volume name.
type: str
project_id:
description: The project ID which the volume belongs to.
type: str
replication_driver_data:
description: Data set by the replication driver
type: str
replication_status:
description: The volume replication status.
type: str
scheduler_hints:
description: Scheduler hints for the volume
type: dict
size:
description: The size of the volume, in gibibytes (GiB).
type: int
snapshot_id:
description: To create a volume from an existing snapshot, specify the
UUID of the volume snapshot. The volume is created in same
availability zone and with same size as the snapshot.
type: str
source_volume_id:
description: The UUID of the source volume. The API creates a new volume
with the same size as the source volume unless a larger size
is requested.
type: str
status:
description: The volume status.
type: str
updated_at:
description: The date and time when the resource was updated.
type: str
user_id:
description: The UUID of the user.
type: str
volume_image_metadata:
description: List of image metadata entries. Only included for volumes
that were created from an image, or from a snapshot of a
volume originally created from an image.
type: dict
volume_type:
description: The associated volume type name for the volume.
type: str
"""
EXAMPLES = r"""
- name: Manage volume
openstack.cloud.volume_manage:
name: newly-managed-vol
source_name: manage-me
host: host@backend-name#pool
- name: Unmanage volume
openstack.cloud.volume_manage:
name: "5c831866-3bb3-4d67-a7d3-1b90880c9d18"
state: absent
"""
from ansible_collections.openstack.cloud.plugins.module_utils.openstack import (
OpenStackModule,
)
class VolumeManageModule(OpenStackModule):
argument_spec = dict(
description=dict(type="str"),
metadata=dict(type="dict"),
source_name=dict(type="str"),
source_id=dict(type="str"),
availability_zone=dict(type="str"),
host=dict(type="str"),
bootable=dict(default=False, type="bool"),
volume_type=dict(type="str"),
name=dict(required=True, type="str"),
state=dict(
default="present", choices=["absent", "present"], type="str"
),
)
module_kwargs = dict(
required_if=[("state", "present", ["host"])],
supports_check_mode=True,
)
def run(self):
name = self.params["name"]
state = self.params["state"]
changed = False
if state == "present":
changed = True
if not self.ansible.check_mode:
volumes = self._manage_list()
manageable = volumes["manageable-volumes"]
safe_to_manage = self._is_safe_to_manage(
manageable, self.params["source_name"]
)
if not safe_to_manage:
self.exit_json(changed=False)
volume = self._manage()
if volume:
self.exit_json(
changed=changed, volume=volume.to_dict(computed=False)
)
else:
self.exit_json(changed=False)
else:
self.exit_json(changed=changed)
else:
volume = self.conn.block_storage.find_volume(name)
if volume:
changed = True
if not self.ansible.check_mode:
self._unmanage()
self.exit_json(changed=changed)
else:
self.exit_json(changed=changed)
def _is_safe_to_manage(self, manageable_list, target_name):
entry = next(
(
v
for v in manageable_list
if isinstance(v.get("reference"), dict)
and (
v["reference"].get("name") == target_name
or v["reference"].get("source-name") == target_name
)
),
None,
)
if entry is None:
return False
return entry.get("safe_to_manage", False)
def _manage(self):
kwargs = {
key: self.params[key]
for key in [
"description",
"bootable",
"volume_type",
"availability_zone",
"host",
"metadata",
"name",
]
if self.params.get(key) is not None
}
kwargs["ref"] = {}
if self.params["source_name"]:
kwargs["ref"]["source-name"] = self.params["source_name"]
if self.params["source_id"]:
kwargs["ref"]["source-id"] = self.params["source_id"]
volume = self.conn.block_storage.manage_volume(**kwargs)
return volume
def _manage_list(self):
response = self.conn.block_storage.get(
"/manageable_volumes?host=" + self.params["host"],
microversion="3.8",
)
response.raise_for_status()
manageable_volumes = response.json()
return manageable_volumes
def _unmanage(self):
self.conn.block_storage.unmanage_volume(self.params["name"])
def main():
module = VolumeManageModule()
module()
if __name__ == "__main__":
main()
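`_is_safe_to_manage` above scans the manageable-volumes listing for an entry whose reference matches the target by either key, then reports its `safe_to_manage` flag. A standalone sketch of that lookup, operating on plain dicts shaped like the API response:

```python
def is_safe_to_manage(manageable, target_name):
    # Find the first manageable-volume entry whose reference matches the
    # target by 'name' or 'source-name', then report its safe_to_manage flag.
    entry = next(
        (v for v in manageable
         if isinstance(v.get('reference'), dict)
         and target_name in (v['reference'].get('name'),
                             v['reference'].get('source-name'))),
        None,
    )
    return bool(entry and entry.get('safe_to_manage', False))
```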

View File

@@ -0,0 +1,7 @@
---
features:
- |
Added a new ``openstack.cloud.volume_image_metadata`` module to manage
Cinder volume image metadata via the ``os-set_image_metadata`` API.
This enables correct preservation of image provenance and boot semantics
for volumes, which cannot be achieved using regular volume metadata.

View File

@@ -0,0 +1,12 @@
ansible-core>=2.19.0,<2.20.0
flake8
galaxy-importer
openstacksdk
pycodestyle
pylint
rstcheck
ruamel.yaml
tox
voluptuous
yamllint
setuptools

View File

@@ -0,0 +1,12 @@
ansible-core>=2.20.0,<2.21.0
flake8
galaxy-importer
openstacksdk
pycodestyle
pylint
rstcheck
ruamel.yaml
tox
voluptuous
yamllint
setuptools

View File

@@ -0,0 +1,385 @@
import importlib.util
import json
import unittest
from pathlib import Path
from unittest import mock
from unittest.mock import patch
from ansible.module_utils import basic
from ansible.module_utils._text import to_bytes
def _load_module_under_test():
module_path = Path(__file__).resolve().parents[5] / 'plugins/modules/baremetal_port_group.py'
spec = importlib.util.spec_from_file_location('baremetal_port_group', str(module_path))
if spec is None or spec.loader is None:
raise ImportError('Cannot load baremetal_port_group module for tests')
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
return module
baremetal_port_group = _load_module_under_test()
def set_module_args(args):
if '_ansible_remote_tmp' not in args:
args['_ansible_remote_tmp'] = '/tmp'
if '_ansible_keep_remote_files' not in args:
args['_ansible_keep_remote_files'] = False
args = json.dumps({'ANSIBLE_MODULE_ARGS': args})
basic._ANSIBLE_ARGS = to_bytes(args)
class AnsibleExitJson(Exception):
pass
class AnsibleFailJson(Exception):
pass
def exit_json(*args, **kwargs):
if 'changed' not in kwargs:
kwargs['changed'] = False
raise AnsibleExitJson(kwargs)
def fail_json(*args, **kwargs):
kwargs['failed'] = True
raise AnsibleFailJson(kwargs)
class ModuleTestCase(unittest.TestCase):
mock_module = None
mock_sleep = None
def setUp(self):
self.mock_module = patch.multiple(
basic.AnsibleModule,
exit_json=exit_json,
fail_json=fail_json,
)
self.mock_module.start()
self.mock_sleep = patch('time.sleep')
self.mock_sleep.start()
set_module_args({})
self.addCleanup(self.mock_module.stop)
self.addCleanup(self.mock_sleep.stop)
class FakePortGroup(dict[str, object]):
def to_dict(self, computed=False):
return dict(self)
class FakeSDK(object):
class exceptions:
class OpenStackCloudException(Exception):
pass
class ResourceNotFound(Exception):
pass
class TestBaremetalPortGroup(ModuleTestCase):
module = baremetal_port_group
def setUp(self):
super(TestBaremetalPortGroup, self).setUp()
self.module = baremetal_port_group
def _run_module(self, module_args, baremetal):
set_module_args(module_args)
conn = mock.Mock()
conn.baremetal = baremetal
with mock.patch.object(
baremetal_port_group.BaremetalPortGroupModule,
'openstack_cloud_from_module',
return_value=(FakeSDK(), conn),
):
self.module.main()
def _new_baremetal(self):
baremetal = mock.Mock()
baremetal.find_port_group.return_value = None
baremetal.find_node.return_value = {'id': 'node-1'}
return baremetal
def test_create_port_group(self):
baremetal = self._new_baremetal()
baremetal.create_port_group.return_value = FakePortGroup(
id='pg-1',
name='bond0',
node_id='node-1',
address='fa:16:3e:aa:aa:aa',
mode='active-backup',
extra={},
properties={},
standalone_ports_supported=True,
links=[],
created_at='2026-01-01T00:00:00+00:00',
updated_at=None,
)
with self.assertRaises(AnsibleExitJson) as ex:
self._run_module(
{
'id': None,
'name': 'bond0',
'node': 'node-name',
'address': 'fa:16:3e:aa:aa:aa',
'extra': {},
'standalone_ports_supported': True,
'mode': 'active-backup',
'properties': {},
'state': 'present',
},
baremetal,
)
result = ex.exception.args[0]
self.assertTrue(result['changed'])
self.assertEqual('pg-1', result['port_group']['id'])
baremetal.find_node.assert_called_once_with('node-name', ignore_missing=False)
baremetal.create_port_group.assert_called_once_with(
name='bond0',
node_id='node-1',
address='fa:16:3e:aa:aa:aa',
extra={},
standalone_ports_supported=True,
mode='active-backup',
properties={},
)
def test_create_port_group_without_node_fails(self):
baremetal = self._new_baremetal()
with self.assertRaises(AnsibleFailJson) as ex:
self._run_module(
{
'id': None,
'name': 'bond0',
'node': None,
'address': None,
'extra': None,
'standalone_ports_supported': None,
'mode': None,
'properties': None,
'state': 'present',
},
baremetal,
)
self.assertIn("Parameter 'node' is required", ex.exception.args[0]['msg'])
baremetal.create_port_group.assert_not_called()
def test_update_port_group_when_values_changed(self):
baremetal = self._new_baremetal()
baremetal.find_port_group.return_value = FakePortGroup(
id='pg-1',
name='bond0',
node_id='node-1',
mode='active-backup',
address=None,
extra={},
properties={},
standalone_ports_supported=True,
links=[],
created_at='2026-01-01T00:00:00+00:00',
updated_at=None,
)
baremetal.update_port_group.return_value = FakePortGroup(
id='pg-1',
name='bond0',
node_id='node-1',
mode='802.3ad',
address=None,
extra={},
properties={},
standalone_ports_supported=True,
links=[],
created_at='2026-01-01T00:00:00+00:00',
updated_at='2026-01-02T00:00:00+00:00',
)
with self.assertRaises(AnsibleExitJson) as ex:
self._run_module(
{
'id': 'pg-1',
'name': None,
'node': None,
'address': None,
'extra': None,
'standalone_ports_supported': None,
'mode': '802.3ad',
'properties': None,
'state': 'present',
},
baremetal,
)
result = ex.exception.args[0]
self.assertTrue(result['changed'])
self.assertEqual('802.3ad', result['port_group']['mode'])
baremetal.update_port_group.assert_called_once_with('pg-1', mode='802.3ad')
def test_present_noop_when_already_matching(self):
baremetal = self._new_baremetal()
baremetal.find_port_group.return_value = FakePortGroup(
id='pg-1',
name='bond0',
node_id='node-1',
mode='active-backup',
address='fa:16:3e:aa:aa:aa',
extra={'a': 'b'},
properties={'miimon': '100'},
standalone_ports_supported=False,
links=[],
created_at='2026-01-01T00:00:00+00:00',
updated_at=None,
)
with self.assertRaises(AnsibleExitJson) as ex:
self._run_module(
{
'id': 'pg-1',
'name': 'bond0',
'node': None,
'address': 'fa:16:3e:aa:aa:aa',
'extra': {'a': 'b'},
'standalone_ports_supported': False,
'mode': 'active-backup',
'properties': {'miimon': '100'},
'state': 'present',
},
baremetal,
)
result = ex.exception.args[0]
self.assertFalse(result['changed'])
baremetal.update_port_group.assert_not_called()
def test_delete_existing_port_group(self):
baremetal = self._new_baremetal()
baremetal.find_port_group.return_value = FakePortGroup(id='pg-1', name='bond0')
with self.assertRaises(AnsibleExitJson) as ex:
self._run_module(
{
'id': 'pg-1',
'name': None,
'node': None,
'address': None,
'extra': None,
'standalone_ports_supported': None,
'mode': None,
'properties': None,
'state': 'absent',
},
baremetal,
)
result = ex.exception.args[0]
self.assertTrue(result['changed'])
baremetal.delete_port_group.assert_called_once_with('pg-1')
def test_delete_missing_port_group_is_noop(self):
baremetal = self._new_baremetal()
baremetal.find_port_group.return_value = None
with self.assertRaises(AnsibleExitJson) as ex:
self._run_module(
{
'id': 'pg-1',
'name': None,
'node': None,
'address': None,
'extra': None,
'standalone_ports_supported': None,
'mode': None,
'properties': None,
'state': 'absent',
},
baremetal,
)
result = ex.exception.args[0]
self.assertFalse(result['changed'])
baremetal.delete_port_group.assert_not_called()
def test_check_mode_create_marks_changed(self):
baremetal = self._new_baremetal()
baremetal.find_port_group.return_value = None
with self.assertRaises(AnsibleExitJson) as ex:
self._run_module(
{
'_ansible_check_mode': True,
'id': None,
'name': 'bond0',
'node': 'node-name',
'address': None,
'extra': None,
'standalone_ports_supported': None,
'mode': None,
'properties': None,
'state': 'present',
},
baremetal,
)
result = ex.exception.args[0]
self.assertTrue(result['changed'])
baremetal.create_port_group.assert_not_called()
baremetal.find_node.assert_called_once_with('node-name', ignore_missing=False)
def test_check_mode_create_without_node_fails(self):
baremetal = self._new_baremetal()
baremetal.find_port_group.return_value = None
with self.assertRaises(AnsibleFailJson) as ex:
self._run_module(
{
'_ansible_check_mode': True,
'id': None,
'name': 'bond0',
'node': None,
'address': None,
'extra': None,
'standalone_ports_supported': None,
'mode': None,
'properties': None,
'state': 'present',
},
baremetal,
)
self.assertIn("Parameter 'node' is required", ex.exception.args[0]['msg'])
baremetal.create_port_group.assert_not_called()
baremetal.find_node.assert_not_called()
def test_find_port_group_resource_not_found_returns_none(self):
baremetal = self._new_baremetal()
baremetal.find_port_group.side_effect = FakeSDK.exceptions.ResourceNotFound()
with self.assertRaises(AnsibleExitJson) as ex:
self._run_module(
{
'id': 'pg-1',
'name': None,
'node': None,
'address': None,
'extra': None,
'standalone_ports_supported': None,
'mode': None,
'properties': None,
'state': 'absent',
},
baremetal,
)
result = ex.exception.args[0]
self.assertFalse(result['changed'])

View File

@@ -24,7 +24,7 @@ echo "Running test with Python version ${PY_VER}"
rm -rf "${ANSIBLE_COLLECTIONS_PATH}"
mkdir -p ${ANSIBLE_COLLECTIONS_PATH}/ansible_collections/openstack/cloud
cp -a ${TOXDIR}/{plugins,meta,tests,docs} ${ANSIBLE_COLLECTIONS_PATH}/ansible_collections/openstack/cloud
cp -a ${TOXDIR}/{plugins,meta,tests,docs,galaxy.yml} ${ANSIBLE_COLLECTIONS_PATH}/ansible_collections/openstack/cloud
cd ${ANSIBLE_COLLECTIONS_PATH}/ansible_collections/openstack/cloud/
echo "Running ansible-test with version:"
ansible --version

tox.ini
View File

@@ -2,12 +2,10 @@
minversion = 3.18.0
envlist = linters_latest,ansible_latest
skipsdist = True
ignore_basepython_conflict = True
[testenv]
skip_install = True
install_command = python3 -m pip install {opts} {packages}
basepython = python3
passenv =
OS_*
setenv =
@@ -43,7 +41,7 @@ commands =
ansible-galaxy collection build --force {toxinidir} --output-path {toxinidir}/build_artifact
bash {toxinidir}/tools/check-import.sh {toxinidir}
[testenv:linters_{2_9,2_11,2_12,2_16,2_18,latest}]
[testenv:linters_{2_18,2_19,2_20,latest}]
allowlist_externals = bash
commands =
{[testenv:build]commands}
@@ -54,11 +52,9 @@ deps =
-c{env:TOX_CONSTRAINTS_FILE:{toxinidir}/tests/constraints-none.txt}
{[testenv:build]deps}
linters_latest: -r{toxinidir}/tests/requirements.txt
linters_2_9: -r{toxinidir}/tests/requirements-ansible-2.9.txt
linters_2_11: -r{toxinidir}/tests/requirements-ansible-2.11.txt
linters_2_12: -r{toxinidir}/tests/requirements-ansible-2.12.txt
linters_2_16: -r{toxinidir}/tests/requirements-ansible-2.16.txt
linters_2_16: -r{toxinidir}/tests/requirements-ansible-2.18.txt
linters_2_18: -r{toxinidir}/tests/requirements-ansible-2.18.txt
linters_2_19: -r{toxinidir}/tests/requirements-ansible-2.19.txt
linters_2_20: -r{toxinidir}/tests/requirements-ansible-2.20.txt
passenv = *
[flake8]
@@ -72,18 +68,16 @@ ignore = W503,H4,E501,E402,H301
show-source = True
exclude=.venv,.git,.tox,dist,doc,*lib/python*,*egg,build,ansible_collections
[testenv:ansible_{2_9,2_11,2_12,2_16,2_18,latest}]
[testenv:ansible_{2_18,2_19,2_20,latest}]
allowlist_externals = bash
commands =
bash {toxinidir}/ci/run-ansible-tests-collection.sh -e {envdir} {posargs}
deps =
-c{env:TOX_CONSTRAINTS_FILE:{toxinidir}/tests/constraints-none.txt}
ansible_latest: -r{toxinidir}/tests/requirements.txt
ansible_2_9: -r{toxinidir}/tests/requirements-ansible-2.9.txt
ansible_2_11: -r{toxinidir}/tests/requirements-ansible-2.11.txt
ansible_2_12: -r{toxinidir}/tests/requirements-ansible-2.12.txt
ansible_2_16: -r{toxinidir}/tests/requirements-ansible-2.16.txt
ansible_2_18: -r{toxinidir}/tests/requirements-ansible-2.18.txt
ansible_2_19: -r{toxinidir}/tests/requirements-ansible-2.19.txt
ansible_2_20: -r{toxinidir}/tests/requirements-ansible-2.20.txt
# Need to pass some env vars for the Ansible playbooks
passenv =
HOME