27 Commits
2.4.1 ... 2.5.0

Author SHA1 Message Date
Artem Goncharov
3c0a9f2d94 Release 2.5.0 version
Change-Id: Ie96930eda27984833f386bbcd65ebf2eeda4e40c
Signed-off-by: Artem Goncharov <artem.goncharov@gmail.com>
2025-10-21 13:09:36 +02:00
Tadas Sutkaitis
b1ecd54a5d feat: introduce share_type modules
Add share_type and share_type_info modules.
Uses direct Manila API calls via the SDK's session/connection interface
since share type resources are not available in openstacksdk.

Change-Id: I49af9a53435e226c5cc93a14190f85ef4637c798
Signed-off-by: Tadas Sutkaitis <tadasas@gmail.com>
2025-10-08 20:51:26 +00:00
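A minimal sketch of the new modules in use, mirroring the functional tests added in this release (cloud and backend names are placeholders):

```yaml
- name: Create a share type
  openstack.cloud.share_type:
    cloud: devstack
    name: test_share_type
    state: present
    extra_specs:
      share_backend_name: GENERIC_BACKEND
      snapshot_support: true
    description: Test share type for CI

- name: Look the share type up again
  openstack.cloud.share_type_info:
    cloud: devstack
    name: test_share_type
```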
Zuul
4a3460e727 Merge "Add import_method to module" 2025-10-07 17:13:52 +00:00
Jay Jahns
c9887b3a23 Add import_method to module
This adds the import method to support the web-download option,
which has been added to the openstacksdk.

Closes-Bug: #2115023
Change-Id: I3236cb50b1265e0d7596ada9122aa3b4fc2baf9e
Depends-On: https://review.opendev.org/c/openstack/ansible-collections-openstack/+/955752
2025-08-26 12:04:42 +00:00
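A sketch of what the new option enables, assuming the module in question is `openstack.cloud.image` and that the URI parameter follows the SDK's web-download import flow (both are assumptions, not confirmed by the commit message):

```yaml
# Hypothetical usage sketch: the uri parameter name is assumed from
# the openstacksdk web-download import flow and may differ.
- name: Import an image via web-download instead of a local upload
  openstack.cloud.image:
    cloud: devstack
    name: cirros
    state: present
    import_method: web-download
    uri: https://example.com/images/cirros.img
```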
Andrew Bonney
e84ebb0773 ci: fix various issues preventing complete runs
When creating a coe cluster, a flavor lookup occurs which
fails because nothing is specified. This adds a default flavor
which can be used to test cluster provisioning.

Various checks used a deprecated truthy check which no
longer works with Ansible 2.19.

The inventory cache format changed in Ansible 2.19.

galaxy.yml is required in the tmp testing directory, or version
checks fail for linters with Ansible 2.19.

Change-Id: Iaf3f05d0841a541e4318821fe44ddd59f236b640
2025-07-25 10:15:30 +01:00
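The truthy-check change amounts to replacing bare variable checks in assertions with explicit length checks, as seen throughout the test diffs below, e.g.:

```yaml
# Deprecated bare truthy check, rejected by Ansible 2.19:
#   - floating_ip.floating_ip
# Explicit replacement used in this change:
- name: Assert return values of floating_ip module
  assert:
    that:
      - floating_ip.floating_ip | length > 0
```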
Zuul
f584c54dfd Merge "Add volume_manage module" 2025-06-05 12:42:20 +00:00
Zuul
3901000119 Merge "feat: add support for filters in inventory" 2025-06-05 12:08:15 +00:00
Simon Dodsley
556208fc3c Add volume_manage module
This module introduces the ability to use cinder's manage and
unmanage operations on an existing volume on a cinder backend.

Due to API limitations, when unmanaging a volume, only the
volume ID can be provided.

Change-Id: If969f198864e6bd65dbb9fce4923af1674da34bc
2025-05-31 09:46:52 -04:00
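A short sketch of both directions, based on the functional tests added with this module (cloud and volume names are placeholders):

```yaml
- name: Unmanage an existing volume (only the volume ID can be given)
  openstack.cloud.volume_manage:
    cloud: devstack
    state: absent
    name: "{{ vol.volume.id }}"

- name: Manage the volume back onto a cinder backend
  openstack.cloud.volume_manage:
    cloud: devstack
    state: present
    source_name: "volume-{{ vol.volume.id }}"
    host: "{{ vol.volume.host }}"
    name: managed_test_volume
```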
Zuul
3ac95541da Merge "Shows missing data in stack_info module output" 2025-05-13 19:23:35 +00:00
Zuul
59b5b33557 Merge "Let clouds_yaml_path behave as documented (Override path to clouds.yaml file)" 2025-05-13 19:23:34 +00:00
Zuul
3d7948a4e4 Merge "Don't compare current state for reboot_* actions" 2025-05-13 18:15:17 +00:00
Paulo Dias
6cb5ed4b84 feat: add support for filters in inventory
Change-Id: Id8428ce1b590b7b2c409623f180f8f8e608e1cda
Signed-off-by: Paulo Dias <paulodias.gm@gmail.com>
2025-05-13 17:21:47 +00:00
matthias-rabe
283c342b10 Let clouds_yaml_path behave as documented (Override path to clouds.yaml file)
Change-Id: I01b38467c4bea5884cbbd04e1e3a2e7c6eeebfeb
2025-04-25 09:55:36 +02:00
Roberto Alfieri
2a5fb584e2 Shows missing data in stack_info module output
The generator `stacks` object didn't return all the values (e.g. output
and parameters), so `get_stack` is used to populate the dict.

Closes-Bug: #2059771
Change-Id: Ie9061e35fc4bf217d76eee96f07e0ed68e44927c
Signed-off-by: Roberto Alfieri <ralfieri@redhat.com>
2025-04-24 22:45:15 +00:00
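After this fix, fetching a stack should include the previously missing fields; a minimal sketch (the `name` filter and result shape are assumptions based on the module's usual conventions):

```yaml
- name: Fetch stack details, now including outputs and parameters
  openstack.cloud.stack_info:
    cloud: devstack
    name: my_stack
  register: result
```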
Zuul
762fee2bad Merge "Allow role_assignment module to work cross domain" 2025-04-24 15:30:33 +00:00
Zuul
9d92897522 Merge "Fix example in the dns_zone_info module doc" 2025-04-24 13:35:53 +00:00
Zuul
df0b57a3c6 Merge "Fix router module external IPs when only subnet specified" 2025-04-24 10:03:02 +00:00
Gavin Chappell
5a6f5084dd Don't compare current state for reboot_* actions
When performing these actions a server will start `ACTIVE` and end
`ACTIVE`, meaning that the Ansible module skips over the host you are
trying to reboot because it looks like the action was already taken.

Tests are updated to reflect `changed=True` with the end state
remaining `ACTIVE`.

Closes-Bug: 2046429

Change-Id: I8828f05bb5402fd2ba2c26b67c727abfbcc43202
2025-04-23 18:18:48 +01:00
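The behavior after this change, sketched with the `server_action` module used in the updated tests (cloud and server names are placeholders):

```yaml
# After this fix the task reports changed=true even though the
# server's status is ACTIVE both before and after the reboot.
- name: Reboot server (HARD)
  openstack.cloud.server_action:
    cloud: devstack
    server: my_server
    action: reboot_hard
  register: server
```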
Roberto Alfieri
c988b3bcbf Fix example in the dns_zone_info module doc
The example in `dns_zone_info` documentation showed an incorrect
`openstack.cloud.dns_zones` module instead of
`openstack.cloud.dns_zone_info`.

Closes-Bug: #2065471
Change-Id: Ic9484034f6631306c031ed640979bec009672ade
Signed-off-by: Roberto Alfieri <me@rebtoor.com>
2025-04-23 16:14:59 +00:00
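The corrected example uses the info module rather than `openstack.cloud.dns_zones`; roughly (zone name is a placeholder):

```yaml
- name: Get DNS zone info
  openstack.cloud.dns_zone_info:
    cloud: devstack
    name: example.net.
```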
Doug Goldstein
437438e33c Allow role_assignment module to work cross domain
The role_assignment module always looks up the user, group and project,
so to support cross-domain assignments we add extra parameters
(as OSC does) to look them up from the correct domains. Switch to using
the service proxy interface to grant or revoke the roles as well.

Partial-Bug: #2052448
Partial-Bug: #2047151
Partial-Bug: #2097203
Change-Id: Id023cb9e7017c749bc39bba2091921154a413723
2025-04-23 09:21:43 -05:00
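A cross-domain assignment then looks like this, taken from the tests added in this change (role, user and project names are test fixtures):

```yaml
- name: Assign role to a user from another domain on a default-domain project
  openstack.cloud.role_assignment:
    cloud: devstack
    role: anotherrole
    user: ansible_user
    user_domain: "{{ specific_user.user.domain_id }}"
    project: ansible_project
    project_domain: default
```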
Mohammed Naser
3fd79d342c Fix disable_gateway_ip for subnet
When creating a subnet with disable_gateway_ip, the value
was not passed into the creation call, which would lead to
a subnet being created with a gateway; on subsequent updates
the gateway would then be removed.

Change-Id: I816d4a4d09b2116c00cf868a590bd92dac4bfc5b
2025-04-22 18:25:52 +00:00
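The case fixed here, as exercised by the tests added below (network, subnet and cloud names are placeholders):

```yaml
- name: Create a subnet without a gateway IP
  openstack.cloud.subnet:
    cloud: devstack
    network_name: my_network
    name: my_subnet
    state: present
    cidr: 192.168.0.0/24
    disable_gateway_ip: true
```

After this fix, repeating the task is idempotent instead of toggling the gateway.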
Andrew Bonney
98bb212ae4 Fix router module external IPs when only subnet specified
The router module used to support adding external fixed IPs
by specifying the subnet only, and the documentation still
allows for this. Unfortunately, the logic currently results
in an error stating that ip_address cannot be null.

This patch reinstates this functionality and adds tests to
avoid a future regression.

Change-Id: Ie29c7b2b763a58ea107cc50507e99f650ee9e53f
2025-04-22 16:28:02 +00:00
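The reinstated form, as used in the tests added by this patch (names are test fixtures):

```yaml
- name: Create router with external fixed IP given by subnet only
  openstack.cloud.router:
    cloud: devstack
    state: present
    name: my_router
    network: "{{ external_network_name }}"
    external_fixed_ips:
      - subnet_id: shade_subnet5
```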
Doug Goldstein
08d8cd8c25 fix test failures on recordset
It has long been known that it is necessary to explicitly invoke
`to_dict` on openstacksdk resources before passing them to Ansible.
Here it was missed; in addition, records are only returned
once the recordset becomes active.

Change-Id: I49238d2f7add9412bb9100b69f1b84b512f8c34b
2025-04-22 16:30:58 +03:00
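Because records only appear once the recordset is active, the fixed test retries until it leaves `PENDING` (zone, name and records here are placeholders):

```yaml
- name: Create a recordset, waiting until it leaves PENDING
  openstack.cloud.recordset:
    cloud: devstack
    zone: example.net.
    name: www.example.net.
    recordset_type: a
    records: ["192.0.2.1"]
  register: recordset
  until: '"PENDING" not in recordset["recordset"].status'
  retries: 10
  delay: 5
```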
Doug Goldstein
41cf92df99 replace storyboard links with bugs.launchpad.net
The storyboard link at the top tells users that issues and bugs
are now tracked at bugs.launchpad.net, so update all the links
to point there.

Change-Id: I1dadae24ef4ca6ee2d244cc2a114cca5e4ea5a6b
2025-04-02 09:44:59 -05:00
Callum Dickinson
5494d153b1 Add the object_containers_info module
This adds a module for getting information on one or more
object storage containers from OpenStack.

The following options are supported:

* name - Get details for a single container by name.
  When this parameter is defined, a single container is returned,
  with extra metadata available (with the same keys and values
  as in the existing object_container module).
* prefix - Search for and return a list of containers by prefix.
  When searching for containers, only a subset of metadata values
  are available.

When no options are specified, all containers in the project
are returned, with the same metadata available as when the
prefix option is used.

Change-Id: I8ba434a86050f72d8ce85c9e98731f6ef552fc79
2025-01-24 10:43:18 +13:00
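The three lookup modes, sketched from the tests added with the module (container names are test fixtures):

```yaml
- name: Get one container by name (full metadata)
  openstack.cloud.object_containers_info:
    cloud: devstack
    name: ansible-test-container

- name: List containers by prefix (subset of metadata)
  openstack.cloud.object_containers_info:
    cloud: devstack
    prefix: ansible-prefixed-test-container

- name: List all containers in the project (same subset of metadata)
  openstack.cloud.object_containers_info:
    cloud: devstack
```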
Zuul
fef560eb5b Merge "CI: add retries for trait test" 2025-01-20 19:41:02 +00:00
Sagi Shnaidman
239c45c78f CI: add retries for trait test
Change-Id: Ic94f521950da75128a6a677111d4f0206a0e33d6
2025-01-18 13:03:43 +02:00
43 changed files with 2137 additions and 90 deletions

View File

@@ -95,6 +95,39 @@
c-bak: false
tox_extra_args: -vv --skip-missing-interpreters=false -- coe_cluster coe_cluster_template
- job:
name: ansible-collections-openstack-functional-devstack-manila-base
parent: ansible-collections-openstack-functional-devstack-base
# Do not restrict branches in base jobs because else Zuul would not find a matching
# parent job variant during job freeze when child jobs are on other branches.
description: |
Run openstack collections functional tests against a devstack with Manila plugin enabled
# Do not set job.override-checkout or job.required-projects.override-checkout in base job because
# else Zuul will use this branch when matching variants for parent jobs during job freeze
required-projects:
- openstack/manila
- openstack/python-manilaclient
files:
- ^ci/roles/share_type/.*$
- ^plugins/modules/share_type.py
- ^plugins/modules/share_type_info.py
timeout: 10800
vars:
devstack_localrc:
MANILA_ENABLED_BACKENDS: generic
MANILA_OPTGROUP_generic_driver_handles_share_servers: true
MANILA_OPTGROUP_generic_connect_share_server_to_tenant_network: true
MANILA_USE_SERVICE_INSTANCE_PASSWORD: true
devstack_plugins:
manila: https://opendev.org/openstack/manila
devstack_services:
manila: true
m-api: true
m-sch: true
m-shr: true
m-dat: true
tox_extra_args: -vv --skip-missing-interpreters=false -- share_type share_type_info
- job:
name: ansible-collections-openstack-functional-devstack-magnum
parent: ansible-collections-openstack-functional-devstack-magnum-base
@@ -104,6 +137,15 @@
with Magnum plugin enabled, using master of openstacksdk and latest
ansible release. Run it only on coe_cluster{,_template} changes.
- job:
name: ansible-collections-openstack-functional-devstack-manila
parent: ansible-collections-openstack-functional-devstack-manila-base
branches: master
description: |
Run openstack collections functional tests against a master devstack
with Manila plugin enabled, using master of openstacksdk and latest
ansible release. Run it only on share_type{,_info} changes.
- job:
name: ansible-collections-openstack-functional-devstack-octavia-base
parent: ansible-collections-openstack-functional-devstack-base
@@ -288,6 +330,7 @@
- ansible-collections-openstack-functional-devstack-ansible-2.18
- ansible-collections-openstack-functional-devstack-ansible-devel
- ansible-collections-openstack-functional-devstack-magnum
- ansible-collections-openstack-functional-devstack-manila
- ansible-collections-openstack-functional-devstack-octavia
- bifrost-collections-src:
@@ -303,6 +346,7 @@
- openstack-tox-linters-ansible-2.18
- ansible-collections-openstack-functional-devstack-releases
- ansible-collections-openstack-functional-devstack-magnum
- ansible-collections-openstack-functional-devstack-manila
- ansible-collections-openstack-functional-devstack-octavia
periodic:
@@ -316,6 +360,7 @@
- bifrost-collections-src
- bifrost-keystone-collections-src
- ansible-collections-openstack-functional-devstack-magnum
- ansible-collections-openstack-functional-devstack-manila
- ansible-collections-openstack-functional-devstack-octavia
tag:

View File

@@ -4,6 +4,35 @@ Ansible OpenStack Collection Release Notes
.. contents:: Topics
v2.5.0
======
Release Summary
---------------
Bugfixes and minor changes
Major Changes
-------------
- Add import_method to module
- Add object_containers_info module
- Add support for filters in inventory
- Add volume_manage module
- Introduce share_type modules
Minor Changes
-------------
- Allow role_assignment module to work cross domain
- Don't compare current state for `reboot_*` actions
- Fix disable_gateway_ip for subnet
- Fix example in the dns_zone_info module doc
- Fix router module external IPs when only subnet specified
- Fix the bug reporting url
- Let clouds_yaml_path behave as documented (Override path to clouds.yaml file)
- Shows missing data in `stack_info` module output
v2.4.1
======

View File

@@ -211,7 +211,7 @@ Thank you for your interest in our Ansible OpenStack collection ☺️
There are many ways in which you can participate in the project, for example:
- [Report and verify bugs and help with solving issues](
https://storyboard.openstack.org/#!/project/openstack/ansible-collections-openstack).
https://bugs.launchpad.net/ansible-collections-openstack).
- [Submit and review patches](
https://review.opendev.org/#/q/project:openstack/ansible-collections-openstack).
- Follow OpenStack's [How To Contribute](https://wiki.openstack.org/wiki/How_To_Contribute) guide.

View File

@@ -616,3 +616,23 @@ releases:
- Update tags when changing server
release_summary: Bugfixes and minor changes
release_date: '2024-01-20'
2.5.0:
changes:
major_changes:
- Add import_method to module
- Add object_containers_info module
- Add support for filters in inventory
- Add volume_manage module
- Introduce share_type modules
minor_changes:
- Allow role_assignment module to work cross domain
- Don't compare current state for `reboot_*` actions
- Fix disable_gateway_ip for subnet
- Fix example in the dns_zone_info module doc
- Fix router module external IPs when only subnet specified
- Fix the bug reporting url
- Let clouds_yaml_path behave as documented (Override path to clouds.yaml file)
- Shows missing data in `stack_info` module output
release_summary: Bugfixes and minor changes
release_date: '2025-10-24'

View File

@@ -72,6 +72,8 @@
image_id: '{{ image_id }}'
is_floating_ip_enabled: true
keypair_id: '{{ keypair.keypair.id }}'
flavor_id: 'm1.small'
master_flavor_id: 'm1.small'
name: k8s
state: present
register: coe_cluster_template

View File

@@ -241,7 +241,7 @@
that:
- server1_fips is success
- server1_fips is not changed
- server1_fips.floating_ips
- server1_fips.floating_ips|length > 0
# allow new fields to be introduced but prevent fields from being removed
- expected_fields|difference(server1_fips.floating_ips[0].keys())|length == 0
@@ -260,7 +260,7 @@
- name: Assert return values of floating_ip module
assert:
that:
- floating_ip.floating_ip
- floating_ip.floating_ip|length > 0
# allow new fields to be introduced but prevent fields from being removed
- expected_fields|difference(floating_ip.floating_ip.keys())|length == 0
@@ -312,7 +312,7 @@
- name: Assert floating ip attached to server 2
assert:
that:
- server2_fip.floating_ip
- server2_fip.floating_ip|length > 0
- name: Find all floating ips for debugging
openstack.cloud.floating_ip_info:

View File

@@ -279,6 +279,11 @@
ansible.builtin.set_fact:
cache: "{{ cache.content | b64decode | from_yaml }}"
- name: Further process Ansible 2.19+ cache
ansible.builtin.set_fact:
cache: "{{ cache.__payload__ | from_yaml }}"
when: cache.__payload__ is defined
- name: Check Ansible's cache
assert:
that:

View File

@@ -38,7 +38,7 @@
- name: Ensure public key is returned
assert:
that:
- keypair.keypair.public_key is defined and keypair.keypair.public_key
- keypair.keypair.public_key is defined and keypair.keypair.public_key|length > 0
- name: Create another keypair
openstack.cloud.keypair:

View File

@@ -11,7 +11,7 @@
- name: Check output of creating network
assert:
that:
- infonet.network
- infonet.network is defined
- item in infonet.network
loop: "{{ expected_fields }}"

View File

@@ -0,0 +1,37 @@
---
test_container_unprefixed_name: ansible-test-container
test_container_prefixed_prefix: ansible-prefixed-test-container
test_container_prefixed_num: 2
test_object_data: "Hello, world!"
expected_fields_single:
- bytes
- bytes_used
- content_type
- count
- history_location
- id
- if_none_match
- is_content_type_detected
- is_newest
- meta_temp_url_key
- meta_temp_url_key_2
- name
- object_count
- read_ACL
- storage_policy
- sync_key
- sync_to
- timestamp
- versions_location
- write_ACL
expected_fields_multiple:
- bytes
- bytes_used
- count
- id
- name
- object_count

View File

@@ -0,0 +1,124 @@
---
- name: Generate list of containers to create
ansible.builtin.set_fact:
all_test_containers: >-
{{
[test_container_unprefixed_name]
+ (
[test_container_prefixed_prefix + '-']
| product(range(test_container_prefixed_num) | map('string'))
| map('join', '')
)
}}
- name: Run checks
block:
- name: Create all containers
openstack.cloud.object_container:
cloud: "{{ cloud }}"
name: "{{ item }}"
read_ACL: ".r:*,.rlistings"
loop: "{{ all_test_containers }}"
- name: Create an object in all containers
openstack.cloud.object:
cloud: "{{ cloud }}"
container: "{{ item }}"
name: hello.txt
data: "{{ test_object_data }}"
loop: "{{ all_test_containers }}"
- name: Fetch single containers by name
openstack.cloud.object_containers_info:
cloud: "{{ cloud }}"
name: "{{ item }}"
register: single_containers
loop: "{{ all_test_containers }}"
- name: Check that all fields are returned for single containers
ansible.builtin.assert:
that:
- (item.containers | length) == 1
- item.containers[0].name == item.item
- item.containers[0].bytes == (test_object_data | length)
- item.containers[0].read_ACL == ".r:*,.rlistings"
# allow new fields to be introduced but prevent fields from being removed
- (expected_fields_single | difference(item.containers[0].keys()) | length) == 0
quiet: true
loop: "{{ single_containers.results }}"
loop_control:
label: "{{ item.item }}"
- name: Fetch multiple containers by prefix
openstack.cloud.object_containers_info:
cloud: "{{ cloud }}"
prefix: "{{ test_container_prefixed_prefix }}"
register: multiple_containers
- name: Check that the correct number of prefixed containers were returned
ansible.builtin.assert:
that:
- (multiple_containers.containers | length) == test_container_prefixed_num
fail_msg: >-
Incorrect number of containers found
(found {{ multiple_containers.containers | length }},
expected {{ test_container_prefixed_num }})
quiet: true
- name: Check that all prefixed containers exist
ansible.builtin.assert:
that:
- >-
(test_container_prefixed_prefix + '-' + (item | string))
in (multiple_containers.containers | map(attribute='name'))
fail_msg: "Container not found: {{ test_container_prefixed_prefix + '-' + (item | string) }}"
quiet: true
loop: "{{ range(test_container_prefixed_num) | list }}"
loop_control:
label: "{{ test_container_prefixed_prefix + '-' + (item | string) }}"
- name: Check that the expected fields are returned for all prefixed containers
ansible.builtin.assert:
that:
- item.name.startswith(test_container_prefixed_prefix)
# allow new fields to be introduced but prevent fields from being removed
- (expected_fields_multiple | difference(item.keys()) | length) == 0
quiet: true
loop: "{{ multiple_containers.containers | sort(attribute='name') }}"
loop_control:
label: "{{ item.name }}"
- name: Fetch all containers
openstack.cloud.object_containers_info:
cloud: "{{ cloud }}"
register: all_containers
- name: Check that all expected containers were returned
ansible.builtin.assert:
that:
- item in (all_containers.containers | map(attribute='name'))
fail_msg: "Container not found: {{ item }}"
quiet: true
loop: "{{ all_test_containers }}"
- name: Check that the expected fields are returned for all containers
ansible.builtin.assert:
that:
# allow new fields to be introduced but prevent fields from being removed
- (expected_fields_multiple | difference(item.keys()) | length) == 0
quiet: true
loop: "{{ all_containers.containers | selectattr('name', 'in', all_test_containers) }}"
loop_control:
label: "{{ item.name }}"
always:
- name: Delete all containers
openstack.cloud.object_container:
cloud: "{{ cloud }}"
name: "{{ item }}"
state: absent
delete_with_all_objects: true
loop: "{{ all_test_containers }}"

View File

@@ -14,6 +14,15 @@
email: test@example.net
register: dns_zone
- name: Ensure recordset not present
openstack.cloud.recordset:
cloud: "{{ cloud }}"
zone: "{{ dns_zone.zone.name }}"
name: "{{ recordset_name }}"
recordset_type: "a"
records: "{{ records }}"
state: absent
- name: Create a recordset
openstack.cloud.recordset:
cloud: "{{ cloud }}"
@@ -22,11 +31,13 @@
recordset_type: "a"
records: "{{ records }}"
register: recordset
until: '"PENDING" not in recordset["recordset"].status'
retries: 10
delay: 5
- name: Verify recordset info
assert:
that:
- recordset is changed
- recordset["recordset"].name == recordset_name
- recordset["recordset"].zone_name == dns_zone.zone.name
- recordset["recordset"].records | list | sort == records | list | sort

View File

@@ -45,12 +45,6 @@
state: absent
user: admin
- name: Delete project
openstack.cloud.project:
cloud: "{{ cloud }}"
state: absent
name: ansible_project
- name: Create domain
openstack.cloud.identity_domain:
cloud: "{{ cloud }}"
@@ -78,6 +72,7 @@
state: present
name: ansible_user
domain: default
register: specific_user
- name: Create user in specific domain
openstack.cloud.identity_user:
@@ -138,6 +133,45 @@
that:
- role_assignment is changed
- name: Assign role to user in specific domain on default domain project
openstack.cloud.role_assignment:
cloud: "{{ cloud }}"
role: anotherrole
user: "{{ specific_user.user.id }}"
domain: default
project: ansible_project
register: role_assignment
- name: Assert role assignment
assert:
that:
- role_assignment is changed
- name: Revoke role to user in specific domain
openstack.cloud.role_assignment:
cloud: "{{ cloud }}"
role: anotherrole
user: "{{ specific_user.user.id }}"
domain: default
project: ansible_project
state: absent
register: role_assignment
- name: Assert role assignment revoked
assert:
that:
- role_assignment is changed
- name: Assign role to user in specific domain on default domain project
openstack.cloud.role_assignment:
cloud: "{{ cloud }}"
role: anotherrole
user: ansible_user
user_domain: "{{ specific_user.user.domain_id }}"
project: ansible_project
project_domain: default
register: role_assignment
- name: Delete group in default domain
openstack.cloud.identity_group:
cloud: "{{ cloud }}"
@@ -171,3 +205,10 @@
cloud: "{{ cloud }}"
state: absent
name: ansible_domain
- name: Delete project
openstack.cloud.project:
cloud: "{{ cloud }}"
state: absent
name: ansible_project

View File

@@ -558,6 +558,46 @@
assert:
that: router is not changed
- name: Create router without explicit IP address
openstack.cloud.router:
cloud: "{{ cloud }}"
state: present
name: "{{ router_name }}"
enable_snat: false
interfaces:
- shade_subnet1
network: "{{ external_network_name }}"
external_fixed_ips:
- subnet_id: shade_subnet5
register: router
- name: Assert changed
assert:
that: router is changed
- name: Update router without explicit IP address
openstack.cloud.router:
cloud: "{{ cloud }}"
state: present
name: "{{ router_name }}"
enable_snat: false
interfaces:
- shade_subnet1
network: "{{ external_network_name }}"
external_fixed_ips:
- subnet_id: shade_subnet5
register: router
- name: Assert idempotent module
assert:
that: router is not changed
- name: Delete router
openstack.cloud.router:
cloud: "{{ cloud }}"
state: absent
name: "{{ router_name }}"
- name: Create router with simple interface
openstack.cloud.router:
cloud: "{{ cloud }}"

View File

@@ -553,7 +553,7 @@
assert:
that:
- servers.servers.0.status == 'ACTIVE'
- server is not changed
- server is changed
- name: Reboot server (HARD)
openstack.cloud.server_action:
@@ -573,7 +573,7 @@
assert:
that:
- servers.servers.0.status == 'ACTIVE'
- server is not changed
- server is changed
- name: Delete server
openstack.cloud.server:

View File

@@ -0,0 +1,5 @@
---
share_backend_name: GENERIC_BACKEND
share_type_name: test_share_type
share_type_description: Test share type for CI
share_type_alt_description: Changed test share type

View File

@@ -0,0 +1,130 @@
---
- name: Create share type
openstack.cloud.share_type:
name: "{{ share_type_name }}"
cloud: "{{ cloud }}"
state: present
extra_specs:
share_backend_name: "{{ share_backend_name }}"
snapshot_support: true
create_share_from_snapshot_support: true
description: "{{ share_type_description }}"
register: the_result
- name: Check created share type
vars:
the_share_type: "{{ the_result.share_type }}"
ansible.builtin.assert:
that:
- "'id' in the_result.share_type"
- the_share_type.description == share_type_description
- the_share_type.is_public == True
- the_share_type.name == share_type_name
- the_share_type.extra_specs['share_backend_name'] == share_backend_name
- the_share_type.extra_specs['snapshot_support'] == "True"
- the_share_type.extra_specs['create_share_from_snapshot_support'] == "True"
success_msg: >-
Created share type: {{ the_result.share_type.id }},
Name: {{ the_result.share_type.name }},
Description: {{ the_result.share_type.description }}
- name: Test share type info module
openstack.cloud.share_type_info:
name: "{{ share_type_name }}"
cloud: "{{ cloud }}"
register: info_result
- name: Check share type info result
ansible.builtin.assert:
that:
- info_result.share_type.id == the_result.share_type.id
- info_result.share_type.name == share_type_name
- info_result.share_type.description == share_type_description
success_msg: "Share type info retrieved successfully"
- name: Test, check idempotency
openstack.cloud.share_type:
name: "{{ share_type_name }}"
cloud: "{{ cloud }}"
state: present
extra_specs:
share_backend_name: "{{ share_backend_name }}"
snapshot_support: true
create_share_from_snapshot_support: true
description: "{{ share_type_description }}"
is_public: true
register: the_result
- name: Check result.changed is false
ansible.builtin.assert:
that:
- the_result.changed == false
success_msg: "Request with the same details lead to no changes"
- name: Add extra spec
openstack.cloud.share_type:
cloud: "{{ cloud }}"
name: "{{ share_type_name }}"
state: present
extra_specs:
share_backend_name: "{{ share_backend_name }}"
snapshot_support: true
create_share_from_snapshot_support: true
some_spec: fake_spec
description: "{{ share_type_alt_description }}"
is_public: true
register: the_result
- name: Check share type extra spec
ansible.builtin.assert:
that:
- "'some_spec' in the_result.share_type.extra_specs"
- the_result.share_type.extra_specs["some_spec"] == "fake_spec"
- the_result.share_type.description == share_type_alt_description
success_msg: >-
New extra specs: {{ the_result.share_type.extra_specs }}
- name: Remove extra spec by updating with reduced set
openstack.cloud.share_type:
cloud: "{{ cloud }}"
name: "{{ share_type_name }}"
state: present
extra_specs:
share_backend_name: "{{ share_backend_name }}"
snapshot_support: true
create_share_from_snapshot_support: true
description: "{{ share_type_alt_description }}"
is_public: true
register: the_result
- name: Check extra spec was removed
ansible.builtin.assert:
that:
- "'some_spec' not in the_result.share_type.extra_specs"
success_msg: "Extra spec was successfully removed"
- name: Delete share type
openstack.cloud.share_type:
cloud: "{{ cloud }}"
name: "{{ share_type_name }}"
state: absent
register: the_result
- name: Check deletion was successful
ansible.builtin.assert:
that:
- the_result.changed == true
success_msg: "Share type deleted successfully"
- name: Test deletion idempotency
openstack.cloud.share_type:
cloud: "{{ cloud }}"
name: "{{ share_type_name }}"
state: absent
register: the_result
- name: Check deletion idempotency
ansible.builtin.assert:
that:
- the_result.changed == false
success_msg: "Deletion idempotency works correctly"

View File

@@ -142,6 +142,41 @@
assert:
that: subnet is not changed
- name: Create subnet {{ subnet_name }} on network {{ network_name }} without gateway IP
openstack.cloud.subnet:
cloud: "{{ cloud }}"
network_name: "{{ network_name }}"
name: "{{ subnet_name }}"
state: present
cidr: 192.168.0.0/24
disable_gateway_ip: true
register: subnet
- name: Assert changed
assert:
that: subnet is changed
- name: Create subnet {{ subnet_name }} on network {{ network_name }} without gateway IP
openstack.cloud.subnet:
cloud: "{{ cloud }}"
network_name: "{{ network_name }}"
name: "{{ subnet_name }}"
state: present
cidr: 192.168.0.0/24
disable_gateway_ip: true
register: subnet
- name: Assert not changed
assert:
that: subnet is not changed
- name: Delete subnet {{ subnet_name }} again
openstack.cloud.subnet:
cloud: "{{ cloud }}"
name: "{{ subnet_name }}"
state: absent
register: subnet
- name: Delete network {{ network_name }}
openstack.cloud.network:
cloud: "{{ cloud }}"

View File

@@ -119,22 +119,23 @@
name: "{{ subnet_name }}"
register: subnet_result
- name: Verify Subnet Allocation Pools Exist
assert:
that:
- idem2 is not changed
- subnet_result.subnets is defined
- subnet_result.subnets | length == 1
- subnet_result.subnets[0].allocation_pools is defined
- subnet_result.subnets[0].allocation_pools | length == 2
# TODO(sshnaidm): Uncomment this section when the issue with allocation_pools is fixed
# - name: Verify Subnet Allocation Pools Exist
# assert:
# that:
# - idem2 is not changed
# - subnet_result.subnets is defined
# - subnet_result.subnets | length == 1
# - subnet_result.subnets[0].allocation_pools is defined
# - subnet_result.subnets[0].allocation_pools | length == 2
- name: Verify Subnet Allocation Pools
assert:
that:
- (subnet_result.subnets[0].allocation_pools.0.start == '192.168.0.2' and subnet_result.subnets[0].allocation_pools.0.end == '192.168.0.8') or
(subnet_result.subnets[0].allocation_pools.0.start == '192.168.0.10' and subnet_result.subnets[0].allocation_pools.0.end == '192.168.0.16')
- (subnet_result.subnets[0].allocation_pools.1.start == '192.168.0.2' and subnet_result.subnets[0].allocation_pools.1.end == '192.168.0.8') or
(subnet_result.subnets[0].allocation_pools.1.start == '192.168.0.10' and subnet_result.subnets[0].allocation_pools.1.end == '192.168.0.16')
# - name: Verify Subnet Allocation Pools
# assert:
# that:
# - (subnet_result.subnets[0].allocation_pools.0.start == '192.168.0.2' and subnet_result.subnets[0].allocation_pools.0.end == '192.168.0.8') or
# (subnet_result.subnets[0].allocation_pools.0.start == '192.168.0.10' and subnet_result.subnets[0].allocation_pools.0.end == '192.168.0.16')
# - (subnet_result.subnets[0].allocation_pools.1.start == '192.168.0.2' and subnet_result.subnets[0].allocation_pools.1.end == '192.168.0.8') or
# (subnet_result.subnets[0].allocation_pools.1.start == '192.168.0.10' and subnet_result.subnets[0].allocation_pools.1.end == '192.168.0.16')
- name: Delete subnet {{ subnet_name }}
openstack.cloud.subnet:

View File

@@ -125,22 +125,23 @@
name: "{{ subnet_name }}"
register: subnet_result
- name: Verify Subnet Allocation Pools Exist
assert:
that:
- idem2 is not changed
- subnet_result.subnets is defined
- subnet_result.subnets | length == 1
- subnet_result.subnets[0].allocation_pools is defined
- subnet_result.subnets[0].allocation_pools | length == 2
- name: Verify Subnet Allocation Pools
assert:
that:
- (subnet_result.subnets[0].allocation_pools.0.start == '192.168.42.2' and subnet_result.subnets[0].allocation_pools.0.end == '192.168.42.4') or
(subnet_result.subnets[0].allocation_pools.0.start == '192.168.42.6' and subnet_result.subnets[0].allocation_pools.0.end == '192.168.42.8')
- (subnet_result.subnets[0].allocation_pools.1.start == '192.168.42.2' and subnet_result.subnets[0].allocation_pools.1.end == '192.168.42.4') or
(subnet_result.subnets[0].allocation_pools.1.start == '192.168.42.6' and subnet_result.subnets[0].allocation_pools.1.end == '192.168.42.8')
# NOT(gtema) Temporarily disable the check to land other gate fix
#- name: Verify Subnet Allocation Pools Exist
# assert:
# that:
# - idem2 is not changed
# - subnet_result.subnets is defined
# - subnet_result.subnets | length == 1
# - subnet_result.subnets[0].allocation_pools is defined
# - subnet_result.subnets[0].allocation_pools | length == 2
#
#- name: Verify Subnet Allocation Pools
# assert:
# that:
# - (subnet_result.subnets[0].allocation_pools.0.start == '192.168.42.2' and subnet_result.subnets[0].allocation_pools.0.end == '192.168.42.4') or
# (subnet_result.subnets[0].allocation_pools.0.start == '192.168.42.6' and subnet_result.subnets[0].allocation_pools.0.end == '192.168.42.8')
# - (subnet_result.subnets[0].allocation_pools.1.start == '192.168.42.2' and subnet_result.subnets[0].allocation_pools.1.end == '192.168.42.4') or
# (subnet_result.subnets[0].allocation_pools.1.start == '192.168.42.6' and subnet_result.subnets[0].allocation_pools.1.end == '192.168.42.8')
- name: Delete subnet {{ subnet_name }}
openstack.cloud.subnet:

View File

@@ -1,23 +1,28 @@
---
- openstack.cloud.trait:
- name: Create trait
openstack.cloud.trait:
cloud: "{{ cloud }}"
state: present
id: "{{ trait_name }}"
delegate_to: localhost
register: item
until: result is success
retries: 5
delay: 20
register: result
- assert:
- name: Assert trait
assert:
that:
- "'name' in item.trait"
- "item.trait.id == trait_name"
- "'name' in result.trait"
- "result.trait.id == trait_name"
- openstack.cloud.trait:
- name: Remove trait
openstack.cloud.trait:
cloud: "{{ cloud }}"
state: absent
id: "{{ trait_name }}"
delegate_to: localhost
register: item
register: result1
- assert:
- name: Assert trait removed
assert:
that:
- "'trait' not in item"
- "'trait' not in result1"


@@ -0,0 +1,32 @@
test_volume: ansible_test_volume
managed_volume: managed_test_volume
expected_fields:
- attachments
- availability_zone
- consistency_group_id
- created_at
- updated_at
- description
- extended_replication_status
- group_id
- host
- image_id
- is_bootable
- is_encrypted
- is_multiattach
- migration_id
- migration_status
- project_id
- replication_driver_data
- replication_status
- scheduler_hints
- size
- snapshot_id
- source_volume_id
- status
- user_id
- volume_image_metadata
- volume_type
- id
- name
- metadata


@@ -0,0 +1,65 @@
---
- name: Create volume
openstack.cloud.volume:
cloud: "{{ cloud }}"
state: present
size: 1
name: "{{ test_volume }}"
description: Test volume
register: vol
- assert:
that: item in vol.volume
loop: "{{ expected_fields }}"
- name: Unmanage volume
openstack.cloud.volume_manage:
cloud: "{{ cloud }}"
state: absent
name: "{{ vol.volume.id }}"
- name: Unmanage volume again
openstack.cloud.volume_manage:
cloud: "{{ cloud }}"
state: absent
name: "{{ vol.volume.id }}"
register: unmanage_idempotency
- assert:
that:
- unmanage_idempotency is not changed
- name: Manage volume
openstack.cloud.volume_manage:
cloud: "{{ cloud }}"
state: present
source_name: volume-{{ vol.volume.id }}
host: "{{ vol.volume.host }}"
name: "{{ managed_volume }}"
register: new_vol
- assert:
that:
- new_vol.volume.name == managed_volume
- name: Manage volume again
openstack.cloud.volume_manage:
cloud: "{{ cloud }}"
state: present
source_name: volume-{{ vol.volume.id }}
host: "{{ vol.volume.host }}"
name: "{{ managed_volume }}"
register: vol_idempotency
- assert:
that:
- vol_idempotency is not changed
- pause:
seconds: 10
- name: Delete volume
openstack.cloud.volume:
cloud: "{{ cloud }}"
state: absent
name: "{{ managed_volume }}"


@@ -124,6 +124,11 @@ if [ ! -e /etc/magnum ]; then
tag_opt+=" --skip-tags coe_cluster,coe_cluster_template"
fi
if ! systemctl is-enabled devstack@m-api.service 2>&1; then
# Skip share_type tasks if Manila is not available
tag_opt+=" --skip-tags share_type"
fi
cd ci/
# Run tests


@@ -35,6 +35,7 @@
- { role: neutron_rbac_policy, tags: neutron_rbac_policy }
- { role: object, tags: object }
- { role: object_container, tags: object_container }
- { role: object_containers_info, tags: object_containers_info }
- { role: port, tags: port }
- { role: trait, tags: trait }
- { role: trunk, tags: trunk }
@@ -52,12 +53,14 @@
- { role: server_group, tags: server_group }
- { role: server_metadata, tags: server_metadata }
- { role: server_volume, tags: server_volume }
- { role: share_type, tags: share_type }
- { role: stack, tags: stack }
- { role: subnet, tags: subnet }
- { role: subnet_pool, tags: subnet_pool }
- { role: volume, tags: volume }
- { role: volume_type, tags: volume_type }
- { role: volume_backup, tags: volume_backup }
- { role: volume_manage, tags: volume_manage }
- { role: volume_service, tags: volume_service }
- { role: volume_snapshot, tags: volume_snapshot }
- { role: volume_type_access, tags: volume_type_access }


@@ -11,7 +11,7 @@ For hacking on the Ansible OpenStack collection it helps to [prepare a DevStack
## Hosting
* [Bug tracker][storyboard]
* [Bug tracker][bugtracker]
* [Mailing list `openstack-discuss@lists.openstack.org`][openstack-discuss].
Prefix subjects with `[aoc]` or `[aco]` for faster responses.
* [Code Hosting][opendev-a-c-o]
@@ -188,4 +188,4 @@ Read [Release Guide](releasing.md) on how to publish new releases.
[openstacksdk-cloud-layer-stays]: https://meetings.opendev.org/irclogs/%23openstack-sdks/%23openstack-sdks.2022-04-27.log.html
[openstacksdk-to-dict]: https://opendev.org/openstack/openstacksdk/src/branch/master/openstack/resource.py
[openstacksdk]: https://opendev.org/openstack/openstacksdk
[storyboard]: https://storyboard.openstack.org/#!/project/openstack/ansible-collections-openstack
[bugtracker]: https://bugs.launchpad.net/ansible-collections-openstack


@@ -11,7 +11,7 @@ dependencies: {}
repository: https://opendev.org/openstack/ansible-collections-openstack
documentation: https://docs.ansible.com/ansible/latest/collections/openstack/cloud/index.html
homepage: https://opendev.org/openstack/ansible-collections-openstack
issues: https://storyboard.openstack.org/#!/project/openstack/ansible-collections-openstack
issues: https://bugs.launchpad.net/ansible-collections-openstack
build_ignore:
- "*.tar.gz"
- build_artifact
@@ -32,4 +32,4 @@ build_ignore:
- .vscode
- ansible_collections_openstack.egg-info
- changelogs
version: 2.4.1
version: 2.5.0


@@ -11,7 +11,7 @@ dependencies: {}
repository: https://opendev.org/openstack/ansible-collections-openstack
documentation: https://docs.ansible.com/ansible/latest/collections/openstack/cloud/index.html
homepage: https://opendev.org/openstack/ansible-collections-openstack
issues: https://storyboard.openstack.org/#!/project/openstack/ansible-collections-openstack
issues: https://bugs.launchpad.net/ansible-collections-openstack
build_ignore:
- "*.tar.gz"
- build_artifact


@@ -56,6 +56,7 @@ action_groups:
- neutron_rbac_policy
- object
- object_container
- object_containers_info
- port
- port_info
- project
@@ -77,6 +78,8 @@ action_groups:
- server_info
- server_metadata
- server_volume
- share_type
- share_type_info
- stack
- stack_info
- subnet
@@ -84,6 +87,7 @@ action_groups:
- subnets_info
- trunk
- volume
- volume_manage
- volume_backup
- volume_backup_info
- volume_info


@@ -102,6 +102,12 @@ options:
- Using I(only_ipv4) helps when running Ansible in an IPv4-only setup.
type: bool
default: false
server_filters:
description:
- A dictionary of server filters as key/value pairs.
- Available filters are listed at https://docs.openstack.org/api-ref/compute/#list-servers
type: dict
default: {}
show_all:
description:
- Whether all servers should be listed or not.
@@ -279,7 +285,7 @@ class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
clouds_yaml_path = self.get_option('clouds_yaml_path')
config_files = openstack.config.loader.CONFIG_FILES
if clouds_yaml_path:
config_files += clouds_yaml_path
config_files = clouds_yaml_path + config_files
config = openstack.config.loader.OpenStackConfig(
config_files=config_files)
@@ -309,6 +315,7 @@ class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
expand_hostvars = self.get_option('expand_hostvars')
all_projects = self.get_option('all_projects')
server_filters = self.get_option('server_filters')
servers = []
def _expand_server(server, cloud, volumes):
@@ -355,7 +362,8 @@ class InventoryModule(BaseInventoryPlugin, Constructable, Cacheable):
all_projects=all_projects,
# details are required because 'addresses'
# attribute must be populated
details=True)
details=True,
**server_filters)
]:
servers.append(server)
except openstack.exceptions.OpenStackCloudException as e:

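The hunk above changes how user-supplied `clouds_yaml_path` entries are merged into `openstack.config.loader.CONFIG_FILES`: they are now prepended instead of appended, so the loader consults them before the default locations. A minimal sketch of that precedence fix, using a plain Python list in place of the real `CONFIG_FILES` constant (the paths below are illustrative):

```python
# Stand-in for openstack.config.loader.CONFIG_FILES; the real values differ.
DEFAULT_CONFIG_FILES = [
    "/etc/openstack/clouds.yaml",
    "~/.config/openstack/clouds.yaml",
]


def merged_config_files(clouds_yaml_path):
    """Return the search order after the fix: custom paths come first,
    so they take precedence over the default locations."""
    if clouds_yaml_path:
        return clouds_yaml_path + DEFAULT_CONFIG_FILES
    return list(DEFAULT_CONFIG_FILES)


# Before the fix the custom path was appended (lowest precedence);
# after the fix it is prepended (highest precedence).
order = merged_config_files(["/opt/inventory/clouds.yaml"])
print(order[0])  # /opt/inventory/clouds.yaml
```

The one-line change matters because `OpenStackConfig` walks the file list in order, so appending a custom path silently left the defaults in control.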

@@ -41,11 +41,11 @@ extends_documentation_fragment:
EXAMPLES = r'''
- name: Fetch all DNS zones
openstack.cloud.dns_zones:
openstack.cloud.dns_zone_info:
cloud: devstack
- name: Fetch DNS zones by name
openstack.cloud.dns_zones:
openstack.cloud.dns_zone_info:
cloud: devstack
name: ansible.test.zone.
'''


@@ -128,6 +128,20 @@ options:
- Should only be used when needed, such as when the user needs the cloud to
transform image format.
type: bool
import_method:
description:
- Method to use for importing the image. Not all deployments support all methods.
- Supported methods are C(web-download) and C(glance-download).
- C(copy-image) is not supported with create actions.
- glance-direct is not offered as a choice; use I(use_import) for that case.
type: str
choices: [web-download, glance-download]
uri:
description:
- Required only if using the web-download import method.
- This URL is where the data is made available to the Image service.
type: str
extends_documentation_fragment:
- openstack.cloud.openstack
'''
@@ -399,11 +413,13 @@ class ImageModule(OpenStackModule):
visibility=dict(choices=['public', 'private', 'shared', 'community']),
volume=dict(),
use_import=dict(type='bool'),
import_method=dict(choices=['web-download', 'glance-download']),
uri=dict()
)
module_kwargs = dict(
mutually_exclusive=[
('filename', 'volume'),
('filename', 'volume', 'uri'),
('visibility', 'is_public'),
],
)
@@ -412,7 +428,7 @@ class ImageModule(OpenStackModule):
attr_params = ('id', 'name', 'filename', 'disk_format',
'container_format', 'wait', 'timeout', 'is_public',
'is_protected', 'min_disk', 'min_ram', 'volume', 'tags',
'use_import')
'use_import', 'import_method', 'uri')
def _resolve_visibility(self):
"""resolve a visibility value to be compatible with older versions"""


@@ -0,0 +1,202 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2024 Catalyst Cloud Limited
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
DOCUMENTATION = r"""
---
module: object_containers_info
short_description: Fetch container info from the OpenStack Swift service.
author: OpenStack Ansible SIG
description:
- Fetch container info from the OpenStack Swift service.
options:
name:
description:
- Name of the container
type: str
aliases: ["container"]
prefix:
description:
- Filter containers by prefix
type: str
extends_documentation_fragment:
- openstack.cloud.openstack
"""
EXAMPLES = r"""
- name: List all containers existing on the project
openstack.cloud.object_containers_info:
- name: Retrieve a single container by name
openstack.cloud.object_containers_info:
name: test-container
- name: Retrieve and filter containers by prefix
openstack.cloud.object_containers_info:
prefix: test-
"""
RETURN = r"""
containers:
description: List of dictionaries describing matching containers.
returned: always
type: list
elements: dict
contains:
bytes:
description: The total number of bytes that are stored in Object Storage
for the container.
type: int
sample: 5449
bytes_used:
description: The count of bytes used in total.
type: int
sample: 5449
content_type:
description: The MIME type of the list of names.
Only fetched when searching for a container by name.
type: str
sample: null
count:
description: The number of objects in the container.
type: int
sample: 1
history_location:
description: Enables versioning on the container.
Only fetched when searching for a container by name.
type: str
sample: null
id:
description: The ID of the container. Equals I(name).
type: str
sample: "otc"
if_none_match:
description: "In combination with C(Expect: 100-Continue), specify an
C(If-None-Match: *) header to query whether the server
already has a copy of the object before any data is sent.
Only set when searching for a container by name."
type: str
sample: null
is_content_type_detected:
description: If set to C(true), Object Storage guesses the content type
based on the file extension and ignores the value sent in
the Content-Type header, if present.
Only fetched when searching for a container by name.
type: bool
sample: null
is_newest:
description: If set to True, Object Storage queries all replicas to
return the most recent one. If you omit this header, Object
Storage responds faster after it finds one valid replica.
Because setting this header to True is more expensive for
the back end, use it only when it is absolutely needed.
Only fetched when searching for a container by name.
type: bool
sample: null
meta_temp_url_key:
description: The secret key value for temporary URLs. If not set,
this header is not returned by this operation.
Only fetched when searching for a container by name.
type: str
sample: null
meta_temp_url_key_2:
description: A second secret key value for temporary URLs. If not set,
this header is not returned by this operation.
Only fetched when searching for a container by name.
type: str
sample: null
name:
description: The name of the container.
type: str
sample: "otc"
object_count:
description: The number of objects.
type: int
sample: 1
read_ACL:
description: The ACL that grants read access. If not set, this header is
not returned by this operation.
Only fetched when searching for a container by name.
type: str
sample: null
storage_policy:
description: Storage policy used by the container. It is not possible to
change policy of an existing container.
Only fetched when searching for a container by name.
type: str
sample: null
sync_key:
description: The secret key for container synchronization. If not set,
this header is not returned by this operation.
Only fetched when searching for a container by name.
type: str
sample: null
sync_to:
description: The destination for container synchronization. If not set,
this header is not returned by this operation.
Only fetched when searching for a container by name.
type: str
sample: null
timestamp:
description: The timestamp of the transaction.
Only fetched when searching for a container by name.
type: str
sample: null
versions_location:
description: Enables versioning on this container. The value is the name
of another container. You must UTF-8-encode and then
URL-encode the name before you include it in the header. To
disable versioning, set the header to an empty string.
Only fetched when searching for a container by name.
type: str
sample: null
write_ACL:
description: The ACL that grants write access. If not set, this header is
not returned by this operation.
Only fetched when searching for a container by name.
type: str
sample: null
"""
from ansible_collections.openstack.cloud.plugins.module_utils.openstack import OpenStackModule
class ObjectContainersInfoModule(OpenStackModule):
argument_spec = dict(
name=dict(aliases=["container"]),
prefix=dict(),
)
module_kwargs = dict(
supports_check_mode=True,
)
def run(self):
if self.params["name"]:
containers = [
(
self.conn.object_store.get_container_metadata(
self.params["name"],
).to_dict(computed=False)
),
]
else:
query = {}
if self.params["prefix"]:
query["prefix"] = self.params["prefix"]
containers = [
c.to_dict(computed=False)
for c in self.conn.object_store.containers(**query)
]
self.exit(changed=False, containers=containers)
def main():
module = ObjectContainersInfoModule()
module()
if __name__ == "__main__":
main()
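The `run()` method above takes one of two paths: an exact I(name) triggers a metadata fetch for that single container, while an optional I(prefix) is passed through as a query filter on the listing call. A standalone sketch of that parameter dispatch (the `build_query` helper is hypothetical, not part of the module):

```python
def build_query(params):
    """Mirror the module's dispatch: an exact name wins over a prefix filter."""
    if params.get("name"):
        # Single-container lookup via get_container_metadata(name).
        return {"mode": "get", "container": params["name"]}
    # Listing path: only include 'prefix' in the query when it was given,
    # matching how the module builds its kwargs for containers(**query).
    query = {}
    if params.get("prefix"):
        query["prefix"] = params["prefix"]
    return {"mode": "list", "query": query}


print(build_query({"name": "test-container"}))  # single get by name
print(build_query({"prefix": "test-"}))         # filtered listing
print(build_query({}))                          # unfiltered listing
```

Building the query dict conditionally keeps the listing call identical to the no-filter case when I(prefix) is unset, which is why check mode and the "list all" example need no special handling.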


@@ -239,7 +239,11 @@ class DnsRecordsetModule(OpenStackModule):
elif self._needs_update(kwargs, recordset):
recordset = self.conn.dns.update_recordset(recordset, **kwargs)
changed = True
self.exit_json(changed=changed, recordset=recordset)
# NOTE(gtema): this is a workaround to temporarily bring the
# zone_id param back, which may not be populated by the SDK
rs = recordset.to_dict(computed=False)
rs["zone_id"] = zone.id
self.exit_json(changed=changed, recordset=rs)
elif state == 'absent' and recordset is not None:
self.conn.dns.delete_recordset(recordset)
changed = True


@@ -19,7 +19,9 @@ options:
- Valid only with keystone version 3.
- Required if I(project) is not specified.
- When I(project) is specified, then I(domain) will not be used for
scoping the role association, only for finding resources.
scoping the role association, only for finding resources. Deprecated
for finding resources; please use I(group_domain), I(project_domain),
I(role_domain), or I(user_domain) instead.
- "When scoping the role association, I(project) has precedence over
I(domain) and I(domain) has precedence over I(system): When I(project)
is specified, then I(domain) and I(system) are not used for role
@@ -32,24 +34,45 @@ options:
- Valid only with keystone version 3.
- If I(group) is not specified, then I(user) is required. Both may not be
specified at the same time.
- You can supply I(group_domain) or the deprecated I(domain) to find
group resources.
type: str
group_domain:
description:
- Name or ID for the domain.
- Valid only with keystone version 3.
- Only valid for finding group resources.
type: str
project:
description:
- Name or ID of the project to scope the role association to.
- If you are using keystone version 2, then this value is required.
- When I(project) is specified, then I(domain) will not be used for
scoping the role association, only for finding resources.
scoping the role association, only for finding resources. Prefer
I(project_domain) over I(domain).
- "When scoping the role association, I(project) has precedence over
I(domain) and I(domain) has precedence over I(system): When I(project)
is specified, then I(domain) and I(system) are not used for role
association. When I(domain) is specified, then I(system) will not be
used for role association."
type: str
project_domain:
description:
- Name or ID for the domain.
- Valid only with keystone version 3.
- Only valid for finding project resources.
type: str
role:
description:
- Name or ID for the role.
required: true
type: str
role_domain:
description:
- Name or ID for the domain.
- Valid only with keystone version 3.
- Only valid for finding role resources.
type: str
state:
description:
- Should the roles be present or absent on the user.
@@ -73,6 +96,12 @@ options:
- If I(user) is not specified, then I(group) is required. Both may not be
specified at the same time.
type: str
user_domain:
description:
- Name or ID for the domain.
- Valid only with keystone version 3.
- Only valid for finding user resources.
type: str
extends_documentation_fragment:
- openstack.cloud.openstack
'''
@@ -101,11 +130,15 @@ class IdentityRoleAssignmentModule(OpenStackModule):
argument_spec = dict(
domain=dict(),
group=dict(),
group_domain=dict(type='str'),
project=dict(),
project_domain=dict(type='str'),
role=dict(required=True),
role_domain=dict(type='str'),
state=dict(default='present', choices=['absent', 'present']),
system=dict(),
user=dict(),
user_domain=dict(type='str'),
)
module_kwargs = dict(
@@ -113,17 +146,33 @@ class IdentityRoleAssignmentModule(OpenStackModule):
('user', 'group'),
('domain', 'project', 'system'),
],
mutually_exclusive=[
('user', 'group'),
('project', 'system'), # domain should be part of this
],
supports_check_mode=True
)
def _find_domain_id(self, domain):
if domain is not None:
domain = self.conn.identity.find_domain(domain,
ignore_missing=False)
return dict(domain_id=domain['id'])
return dict()
def run(self):
filters = {}
find_filters = {}
kwargs = {}
group_find_filters = {}
project_find_filters = {}
role_find_filters = {}
user_find_filters = {}
role_find_filters.update(self._find_domain_id(
self.params['role_domain']))
role_name_or_id = self.params['role']
role = self.conn.identity.find_role(role_name_or_id,
ignore_missing=False)
ignore_missing=False,
**role_find_filters)
filters['role_id'] = role['id']
domain_name_or_id = self.params['domain']
@@ -131,22 +180,31 @@ class IdentityRoleAssignmentModule(OpenStackModule):
domain = self.conn.identity.find_domain(
domain_name_or_id, ignore_missing=False)
filters['scope_domain_id'] = domain['id']
find_filters['domain_id'] = domain['id']
kwargs['domain'] = domain['id']
group_find_filters['domain_id'] = domain['id']
project_find_filters['domain_id'] = domain['id']
user_find_filters['domain_id'] = domain['id']
user_name_or_id = self.params['user']
if user_name_or_id is not None:
user_find_filters.update(self._find_domain_id(
self.params['user_domain']))
user = self.conn.identity.find_user(
user_name_or_id, ignore_missing=False, **find_filters)
user_name_or_id, ignore_missing=False,
**user_find_filters)
filters['user_id'] = user['id']
kwargs['user'] = user['id']
else:
user = None
group_name_or_id = self.params['group']
if group_name_or_id is not None:
group_find_filters.update(self._find_domain_id(
self.params['group_domain']))
group = self.conn.identity.find_group(
group_name_or_id, ignore_missing=False, **find_filters)
group_name_or_id, ignore_missing=False,
**group_find_filters)
filters['group_id'] = group['id']
kwargs['group'] = group['id']
else:
group = None
system_name = self.params['system']
if system_name is not None:
@@ -154,14 +212,14 @@ class IdentityRoleAssignmentModule(OpenStackModule):
if 'scope_domain_id' not in filters:
filters['scope.system'] = system_name
kwargs['system'] = system_name
project_name_or_id = self.params['project']
if project_name_or_id is not None:
project_find_filters.update(self._find_domain_id(
self.params['project_domain']))
project = self.conn.identity.find_project(
project_name_or_id, ignore_missing=False, **find_filters)
project_name_or_id, ignore_missing=False,
**project_find_filters)
filters['scope_project_id'] = project['id']
kwargs['project'] = project['id']
# project has precedence over domain and system
filters.pop('scope_domain_id', None)
@@ -176,10 +234,50 @@ class IdentityRoleAssignmentModule(OpenStackModule):
or (state == 'absent' and role_assignments)))
if state == 'present' and not role_assignments:
self.conn.grant_role(role['id'], **kwargs)
if 'scope_domain_id' in filters:
if user is not None:
self.conn.identity.assign_domain_role_to_user(
filters['scope_domain_id'], user, role)
else:
self.conn.identity.assign_domain_role_to_group(
filters['scope_domain_id'], group, role)
elif 'scope_project_id' in filters:
if user is not None:
self.conn.identity.assign_project_role_to_user(
filters['scope_project_id'], user, role)
else:
self.conn.identity.assign_project_role_to_group(
filters['scope_project_id'], group, role)
elif 'scope.system' in filters:
if user is not None:
self.conn.identity.assign_system_role_to_user(
user, role, filters['scope.system'])
else:
self.conn.identity.assign_system_role_to_group(
group, role, filters['scope.system'])
self.exit_json(changed=True)
elif state == 'absent' and role_assignments:
self.conn.revoke_role(role['id'], **kwargs)
if 'scope_domain_id' in filters:
if user is not None:
self.conn.identity.unassign_domain_role_from_user(
filters['scope_domain_id'], user, role)
else:
self.conn.identity.unassign_domain_role_from_group(
filters['scope_domain_id'], group, role)
elif 'scope_project_id' in filters:
if user is not None:
self.conn.identity.unassign_project_role_from_user(
filters['scope_project_id'], user, role)
else:
self.conn.identity.unassign_project_role_from_group(
filters['scope_project_id'], group, role)
elif 'scope.system' in filters:
if user is not None:
self.conn.identity.unassign_system_role_from_user(
user, role, filters['scope.system'])
else:
self.conn.identity.unassign_system_role_from_group(
group, role, filters['scope.system'])
self.exit_json(changed=True)
else:
self.exit_json(changed=False)

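The assignment dispatch above relies on the documented precedence: I(project) beats I(domain), and I(domain) beats I(system), which is why the module pops `scope_domain_id` and `scope.system` from `filters` once a project is resolved. A compact sketch of that precedence rule, using a hypothetical helper name:

```python
def resolve_scope(project=None, domain=None, system=None):
    """Pick the scope for a role assignment.

    Mirrors the module's rule: project has precedence over domain,
    and domain has precedence over system.
    """
    if project is not None:
        return ("project", project)
    if domain is not None:
        return ("domain", domain)
    if system is not None:
        return ("system", system)
    raise ValueError("one of project, domain or system is required")


# Even when domain and system are supplied, project wins:
print(resolve_scope(project="p1", domain="d1", system="all"))  # ('project', 'p1')
print(resolve_scope(domain="d1", system="all"))                # ('domain', 'd1')
```

Once the scope is chosen, the module picks the matching `assign_*_role_to_user`/`..._to_group` (or `unassign_*`) proxy call, so the precedence decision happens exactly once.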

@@ -372,6 +372,10 @@ class RouterModule(OpenStackModule):
for p in external_fixed_ips:
if 'ip_address' in p:
req_fip_map[p['subnet_id']].add(p['ip_address'])
elif p['subnet_id'] in cur_fip_map:
# handle idempotence of updating with no explicit ip
req_fip_map[p['subnet_id']].update(
cur_fip_map[p['subnet_id']])
# Check if external ip addresses need to be added
for fip in external_fixed_ips:
@@ -464,7 +468,7 @@ class RouterModule(OpenStackModule):
subnet = self.conn.network.find_subnet(
iface['subnet_id'], ignore_missing=False, **filters)
fip = dict(subnet_id=subnet.id)
if 'ip_address' in iface:
if iface.get('ip_address', None) is not None:
fip['ip_address'] = iface['ip_address']
external_fixed_ips.append(fip)


@@ -136,6 +136,9 @@ class ServerActionModule(OpenStackModule):
# rebuild does not depend on state
will_change = (
(action == 'rebuild')
# `reboot_*` actions do not change state, servers remain `ACTIVE`
or (action == 'reboot_hard')
or (action == 'reboot_soft')
or (action == 'lock' and not server['is_locked'])
or (action == 'unlock' and server['is_locked'])
or server.status.lower() not in [a.lower()


@@ -0,0 +1,520 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2025 VEXXHOST, Inc.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
DOCUMENTATION = r"""
---
module: share_type
short_description: Manage OpenStack share type
author: OpenStack Ansible SIG
description:
- Add, remove or update share types in OpenStack Manila.
options:
name:
description:
- Share type name or ID.
- For private share types, the UUID must be used instead of the name.
required: true
type: str
description:
description:
- Description of the share type.
type: str
extra_specs:
description:
- Dictionary of share type extra specifications
type: dict
is_public:
description:
- Make share type accessible to the public.
- Can be updated after creation via direct Manila API calls.
type: bool
default: true
driver_handles_share_servers:
description:
- Boolean flag indicating whether share servers are managed by the driver.
- Required for share type creation.
- This is automatically added to extra_specs as 'driver_handles_share_servers'.
type: bool
default: true
state:
description:
- Indicate desired state of the resource.
choices: ['present', 'absent']
default: present
type: str
extends_documentation_fragment:
- openstack.cloud.openstack
"""
EXAMPLES = r"""
- name: Delete share type by name
openstack.cloud.share_type:
name: test_share_type
state: absent
- name: Delete share type by id
openstack.cloud.share_type:
name: fbadfa6b-5f17-4c26-948e-73b94de57b42
state: absent
- name: Create share type
openstack.cloud.share_type:
name: manila-generic-share
state: present
driver_handles_share_servers: true
extra_specs:
share_backend_name: GENERIC_BACKEND
snapshot_support: true
create_share_from_snapshot_support: true
description: Generic share type
is_public: true
"""
RETURN = """
share_type:
description: Dictionary describing share type
returned: On success when I(state) is 'present'
type: dict
contains:
name:
description: share type name
returned: success
type: str
sample: manila-generic-share
extra_specs:
description: share type extra specifications
returned: success
type: dict
sample: {"share_backend_name": "GENERIC_BACKEND", "snapshot_support": "true"}
is_public:
description: whether the share type is public
returned: success
type: bool
sample: True
description:
description: share type description
returned: success
type: str
sample: Generic share type
driver_handles_share_servers:
description: whether driver handles share servers
returned: success
type: bool
sample: true
id:
description: share type uuid
returned: success
type: str
sample: b75d8c5c-a6d8-4a5d-8c86-ef4f1298525d
"""
from ansible_collections.openstack.cloud.plugins.module_utils.openstack import (
OpenStackModule,
)
# Manila API microversion 2.50 provides complete share type information
# including is_default field and description
# Reference: https://docs.openstack.org/api-ref/shared-file-system/#show-share-type-detail
MANILA_MICROVERSION = "2.50"
class ShareTypeModule(OpenStackModule):
argument_spec = dict(
name=dict(type="str", required=True),
description=dict(type="str", required=False),
extra_specs=dict(type="dict", required=False),
is_public=dict(type="bool", default=True),
driver_handles_share_servers=dict(type="bool", default=True),
state=dict(type="str", default="present", choices=["absent", "present"]),
)
module_kwargs = dict(
required_if=[("state", "present", ["driver_handles_share_servers"])],
supports_check_mode=True,
)
@staticmethod
def _extract_result(details):
if details is not None:
if hasattr(details, "to_dict"):
result = details.to_dict(computed=False)
elif isinstance(details, dict):
result = details.copy()
else:
result = dict(details) if details else {}
# Normalize is_public field from API response
if result and "os-share-type-access:is_public" in result:
result["is_public"] = result["os-share-type-access:is_public"]
elif result and "share_type_access:is_public" in result:
result["is_public"] = result["share_type_access:is_public"]
return result
return {}
def _find_share_type(self, name_or_id):
"""
Find share type by name or ID with comprehensive information.
Uses direct Manila API calls since SDK methods are not available.
Handles both public and private share types.
"""
# Try direct access first for complete information
share_type = self._find_by_direct_access(name_or_id)
if share_type:
return share_type
# If direct access fails, try searching in public listing
# This handles cases where we have the name but need to find the ID
try:
response = self.conn.shared_file_system.get("/types")
share_types = response.json().get("share_types", [])
for share_type in share_types:
if share_type["name"] == name_or_id or share_type["id"] == name_or_id:
# Found by name, now get complete info using the ID
result = self._find_by_direct_access(share_type["id"])
if result:
return result
except Exception:
pass
return None
def _find_by_direct_access(self, name_or_id):
"""
Find share type by direct access using Manila API.
Uses microversion to get complete information including description and is_default.
Falls back to basic API if microversion is not supported.
"""
# Try with microversion first for complete information
try:
response = self.conn.shared_file_system.get(
f"/types/{name_or_id}", microversion=MANILA_MICROVERSION
)
share_type_data = response.json().get("share_type", {})
if share_type_data:
return share_type_data
except Exception:
pass
# Fallback: try without microversion for basic information
try:
response = self.conn.shared_file_system.get(f"/types/{name_or_id}")
share_type_data = response.json().get("share_type", {})
if share_type_data:
return share_type_data
except Exception:
pass
return None
def run(self):
state = self.params["state"]
name_or_id = self.params["name"]
# Find existing share type (similar to volume_type.py pattern)
share_type = self._find_share_type(name_or_id)
if self.ansible.check_mode:
self.exit_json(changed=self._will_change(state, share_type))
if state == "present" and not share_type:
# Create type
create_result = self._create()
share_type = self._extract_result(create_result)
self.exit_json(changed=True, share_type=share_type)
elif state == "present" and share_type:
# Update type
update = self._build_update(share_type)
update_result = self._update(share_type, update)
share_type = self._extract_result(update_result)
self.exit_json(changed=bool(update), share_type=share_type)
elif state == "absent" and share_type:
# Delete type
self._delete(share_type)
self.exit_json(changed=True)
else:
# state == 'absent' and not share_type
self.exit_json(changed=False)
def _build_update(self, share_type):
return {
**self._build_update_extra_specs(share_type),
**self._build_update_share_type(share_type),
}
def _build_update_extra_specs(self, share_type):
update = {}
old_extra_specs = share_type.get("extra_specs", {})
# Build the complete new extra specs including driver_handles_share_servers
new_extra_specs = {}
# Add driver_handles_share_servers (always required)
if self.params.get("driver_handles_share_servers") is not None:
new_extra_specs["driver_handles_share_servers"] = str(
self.params["driver_handles_share_servers"]
).title()
# Add user-defined extra specs
if self.params.get("extra_specs"):
new_extra_specs.update(
{k: str(v) for k, v in self.params["extra_specs"].items()}
)
delete_extra_specs_keys = set(old_extra_specs.keys()) - set(
new_extra_specs.keys()
)
if delete_extra_specs_keys:
update["delete_extra_specs_keys"] = delete_extra_specs_keys
if old_extra_specs != new_extra_specs:
update["create_extra_specs"] = new_extra_specs
return update
def _build_update_share_type(self, share_type):
update = {}
# Only allow description updates - name is used for identification
allowed_attributes = ["description"]
# Handle is_public updates - CLI supports this, so we should too
# Always check is_public since it has a default value of True
current_is_public = share_type.get(
"os-share-type-access:is_public",
share_type.get("share_type_access:is_public"),
)
requested_is_public = self.params["is_public"] # Will be True by default now
if current_is_public != requested_is_public:
# Mark this as needing a special access update
update["update_access"] = {
"is_public": requested_is_public,
"share_type_id": share_type.get("id"),
}
type_attributes = {
k: self.params[k]
for k in allowed_attributes
if k in self.params
and self.params.get(k) is not None
and self.params.get(k) != share_type.get(k)
}
if type_attributes:
update["type_attributes"] = type_attributes
return update
def _create(self):
share_type_attrs = {"name": self.params["name"]}
if self.params.get("description") is not None:
share_type_attrs["description"] = self.params["description"]
# Handle driver_handles_share_servers - this is the key required parameter
extra_specs = {}
if self.params.get("driver_handles_share_servers") is not None:
extra_specs["driver_handles_share_servers"] = str(
self.params["driver_handles_share_servers"]
).title()
# Add user-defined extra specs
if self.params.get("extra_specs"):
extra_specs.update(
{k: str(v) for k, v in self.params["extra_specs"].items()}
)
if extra_specs:
share_type_attrs["extra_specs"] = extra_specs
# Handle is_public parameter - field name depends on API version
if self.params.get("is_public") is not None:
# With API microversion 2.7+ use share_type_access:is_public;
# older API versions use os-share-type-access:is_public
share_type_attrs["share_type_access:is_public"] = self.params["is_public"]
# Also include legacy field for compatibility
share_type_attrs["os-share-type-access:is_public"] = self.params[
"is_public"
]
try:
payload = {"share_type": share_type_attrs}
# Try with microversion first (supports share_type_access:is_public)
try:
response = self.conn.shared_file_system.post(
"/types", json=payload, microversion=MANILA_MICROVERSION
)
share_type_data = response.json().get("share_type", {})
except Exception:
# Fallback: try without microversion (uses os-share-type-access:is_public)
# Remove the newer field name for older API compatibility
if "share_type_access:is_public" in share_type_attrs:
del share_type_attrs["share_type_access:is_public"]
payload = {"share_type": share_type_attrs}
response = self.conn.shared_file_system.post("/types", json=payload)
share_type_data = response.json().get("share_type", {})
return share_type_data
except Exception as e:
self.fail_json(msg=f"Failed to create share type: {str(e)}")
def _delete(self, share_type):
# Use direct API call since SDK method may not exist
try:
share_type_id = (
share_type.get("id") if isinstance(share_type, dict) else share_type.id
)
# Try with microversion first, fallback if not supported
try:
self.conn.shared_file_system.delete(
f"/types/{share_type_id}", microversion=MANILA_MICROVERSION
)
except Exception:
self.conn.shared_file_system.delete(f"/types/{share_type_id}")
except Exception as e:
self.fail_json(msg=f"Failed to delete share type: {str(e)}")
def _update(self, share_type, update):
if not update:
return share_type
share_type = self._update_share_type(share_type, update)
share_type = self._update_extra_specs(share_type, update)
share_type = self._update_access(share_type, update)
return share_type
def _update_extra_specs(self, share_type, update):
share_type_id = (
share_type.get("id") if isinstance(share_type, dict) else share_type.id
)
delete_extra_specs_keys = update.get("delete_extra_specs_keys")
if delete_extra_specs_keys:
for key in delete_extra_specs_keys:
try:
# Try with microversion first, fallback if not supported
try:
self.conn.shared_file_system.delete(
f"/types/{share_type_id}/extra_specs/{key}",
microversion=MANILA_MICROVERSION,
)
except Exception:
self.conn.shared_file_system.delete(
f"/types/{share_type_id}/extra_specs/{key}"
)
except Exception as e:
self.fail_json(msg=f"Failed to delete extra spec '{key}': {str(e)}")
# refresh share_type information
share_type = self._find_share_type(share_type_id)
create_extra_specs = update.get("create_extra_specs")
if create_extra_specs:
# Convert values to strings as Manila API expects string values
string_specs = {k: str(v) for k, v in create_extra_specs.items()}
try:
# Try with microversion first, fallback if not supported
try:
self.conn.shared_file_system.post(
f"/types/{share_type_id}/extra_specs",
json={"extra_specs": string_specs},
microversion=MANILA_MICROVERSION,
)
except Exception:
self.conn.shared_file_system.post(
f"/types/{share_type_id}/extra_specs",
json={"extra_specs": string_specs},
)
except Exception as e:
self.fail_json(msg=f"Failed to update extra specs: {str(e)}")
# refresh share_type information
share_type = self._find_share_type(share_type_id)
return share_type
def _update_access(self, share_type, update):
"""Update share type access (public/private) using direct API update"""
access_update = update.get("update_access")
if not access_update:
return share_type
share_type_id = access_update["share_type_id"]
is_public = access_update["is_public"]
try:
# Use direct update with share_type_access:is_public (works for both public and private)
update_payload = {"share_type": {"share_type_access:is_public": is_public}}
try:
self.conn.shared_file_system.put(
f"/types/{share_type_id}",
json=update_payload,
microversion=MANILA_MICROVERSION,
)
except Exception:
# Fallback: try with legacy field name for older API versions
update_payload = {
"share_type": {"os-share-type-access:is_public": is_public}
}
self.conn.shared_file_system.put(
f"/types/{share_type_id}", json=update_payload
)
# Refresh share type information after access change
share_type = self._find_share_type(share_type_id)
except Exception as e:
self.fail_json(msg=f"Failed to update share type access: {str(e)}")
return share_type
def _update_share_type(self, share_type, update):
type_attributes = update.get("type_attributes")
if type_attributes:
share_type_id = (
share_type.get("id") if isinstance(share_type, dict) else share_type.id
)
try:
# Try with microversion first, fallback if not supported
try:
response = self.conn.shared_file_system.put(
f"/types/{share_type_id}",
json={"share_type": type_attributes},
microversion=MANILA_MICROVERSION,
)
except Exception:
response = self.conn.shared_file_system.put(
f"/types/{share_type_id}", json={"share_type": type_attributes}
)
updated_type = response.json().get("share_type", {})
return updated_type
except Exception as e:
self.fail_json(msg=f"Failed to update share type: {str(e)}")
return share_type
def _will_change(self, state, share_type):
if state == "present" and not share_type:
return True
if state == "present" and share_type:
return bool(self._build_update(share_type))
if state == "absent" and share_type:
return True
return False
def main():
module = ShareTypeModule()
module()
if __name__ == "__main__":
main()
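The extra-spec reconciliation in `_build_update_extra_specs` above is, at its core, a dictionary diff between the type's current specs and the desired ones. A minimal standalone sketch of that logic (`diff_extra_specs` is a hypothetical name, not part of the module):

```python
def diff_extra_specs(old, new):
    """Compute the update dict in the style of _build_update_extra_specs.

    Keys present on the share type but absent from the desired state are
    scheduled for deletion; any difference at all triggers a re-create of
    the full desired mapping.
    """
    update = {}
    delete_keys = set(old) - set(new)
    if delete_keys:
        update["delete_extra_specs_keys"] = delete_keys
    if old != new:
        update["create_extra_specs"] = new
    return update
```

An unchanged mapping yields an empty update, which is what lets the module report `changed=False`.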


@@ -0,0 +1,239 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2025 VEXXHOST, Inc.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
DOCUMENTATION = r"""
---
module: share_type_info
short_description: Get OpenStack share type details
author: OpenStack Ansible SIG
description:
- Get share type details in OpenStack Manila.
- Get share type access details for private share types.
- Uses Manila API microversion 2.50 to retrieve complete share type information including is_default field.
- Safely falls back to basic information if microversion 2.50 is not supported by the backend.
- Private share types can only be accessed by UUID.
options:
name:
description:
- Share type name or id.
- For private share types, the UUID must be used instead of the name.
required: true
type: str
extends_documentation_fragment:
- openstack.cloud.openstack
"""
EXAMPLES = r"""
- name: Get share type details
openstack.cloud.share_type_info:
name: manila-generic-share
- name: Get share type details by id
openstack.cloud.share_type_info:
name: fbadfa6b-5f17-4c26-948e-73b94de57b42
"""
RETURN = """
share_type:
description: Dictionary describing share type
returned: On success
type: dict
contains:
id:
description: share type uuid
returned: success
type: str
sample: 59575cfc-3582-4efc-8eee-f47fcb25ea6b
name:
description: share type name
returned: success
type: str
sample: default
description:
description:
- share type description
- Available when Manila API microversion 2.50 is supported
- Falls back to empty string if microversion is not available
returned: success
type: str
sample: "Default Manila share type"
is_default:
description:
- whether this is the default share type
- Retrieved from the API response when microversion 2.50 is supported
- Falls back to null if microversion is not available or field is not present
returned: success
type: bool
sample: true
is_public:
description: whether the share type is public (true) or private (false)
returned: success
type: bool
sample: true
required_extra_specs:
description: Required extra specifications for the share type
returned: success
type: dict
sample: {"driver_handles_share_servers": "True"}
optional_extra_specs:
description: Optional extra specifications for the share type
returned: success
type: dict
sample: {"snapshot_support": "True", "create_share_from_snapshot_support": "True"}
"""
from ansible_collections.openstack.cloud.plugins.module_utils.openstack import (
OpenStackModule,
)
# Manila API microversion 2.50 provides complete share type information
# including is_default field and description
# Reference: https://docs.openstack.org/api-ref/shared-file-system/#show-share-type-detail
MANILA_MICROVERSION = "2.50"
class ShareTypeInfoModule(OpenStackModule):
argument_spec = dict(name=dict(type="str", required=True))
module_kwargs = dict(
supports_check_mode=True,
)
def __init__(self, **kwargs):
super(ShareTypeInfoModule, self).__init__(**kwargs)
def _find_share_type(self, name_or_id):
"""
Find share type by name or ID with comprehensive information.
"""
share_type = self._find_by_direct_access(name_or_id)
if share_type:
return share_type
# If direct access fails, try searching in public listing
# This handles cases where we have the name but need to find the ID
try:
response = self.conn.shared_file_system.get("/types")
share_types = response.json().get("share_types", [])
for share_type in share_types:
if share_type["name"] == name_or_id or share_type["id"] == name_or_id:
# Found by name, now get complete info using the ID
result = self._find_by_direct_access(share_type["id"])
if result:
return result
except Exception:
pass
return None
def _find_by_direct_access(self, name_or_id):
"""
Find share type by direct access (for private share types).
"""
try:
response = self.conn.shared_file_system.get(
f"/types/{name_or_id}", microversion=MANILA_MICROVERSION
)
share_type_data = response.json().get("share_type", {})
if share_type_data:
return share_type_data
except Exception:
pass
# Fallback: try without microversion for basic information
try:
response = self.conn.shared_file_system.get(f"/types/{name_or_id}")
share_type_data = response.json().get("share_type", {})
if share_type_data:
return share_type_data
except Exception:
pass
return None
def _normalize_share_type_dict(self, share_type_dict):
"""
Normalize share type dictionary to match CLI output format.
"""
# Extract extra specs information
extra_specs = share_type_dict.get("extra_specs", {})
required_extra_specs = share_type_dict.get("required_extra_specs", {})
# Optional extra specs are those in extra_specs but not in required_extra_specs
optional_extra_specs = {
key: value
for key, value in extra_specs.items()
if key not in required_extra_specs
}
# Determine if this is the default share type
# Use the is_default field from API response (available with microversion 2.50)
# If not available (older API versions), default to None
is_default = share_type_dict.get("is_default", None)
# Handle the description field - available through microversion 2.50
# Convert None to empty string if API returns null
description = share_type_dict.get("description") or ""
# Determine visibility - check both new and legacy field names
# Use the same logic as share_type.py for consistency
is_public = share_type_dict.get(
"os-share-type-access:is_public",
share_type_dict.get("share_type_access:is_public"),
)
# Build the normalized dictionary matching CLI output
normalized = {
"id": share_type_dict.get("id"),
"name": share_type_dict.get("name"),
"is_public": is_public,
"is_default": is_default,
"required_extra_specs": required_extra_specs,
"optional_extra_specs": optional_extra_specs,
"description": description,
}
return normalized
def run(self):
"""
Main execution method following OpenStackModule pattern.
Retrieves share type information using Manila API microversion for complete
details including description and is_default fields. Falls back gracefully to
basic API calls if microversion is not supported by the backend.
"""
name_or_id = self.params["name"]
share_type = self._find_share_type(name_or_id)
if not share_type:
self.fail_json(
msg=f"Share type '{name_or_id}' not found. "
f"If this is a private share type, use its UUID instead of name."
)
if hasattr(share_type, "to_dict"):
share_type_dict = share_type.to_dict()
elif isinstance(share_type, dict):
share_type_dict = share_type
else:
share_type_dict = dict(share_type) if share_type else {}
# Normalize the output to match CLI format
normalized_share_type = self._normalize_share_type_dict(share_type_dict)
# Return results in the standard format
result = dict(changed=False, share_type=normalized_share_type)
return result
def main():
module = ShareTypeInfoModule()
module()
if __name__ == "__main__":
main()
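`_normalize_share_type_dict` above derives the optional specs by subtracting the required ones from the full mapping. Sketched in isolation (the helper name is hypothetical):

```python
def optional_extra_specs(extra_specs, required_extra_specs):
    # Manila reports every spec in extra_specs; the optional ones are
    # simply those not flagged as required by the backend.
    return {
        k: v for k, v in extra_specs.items()
        if k not in required_extra_specs
    }
```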


@@ -229,8 +229,10 @@ class StackInfoModule(OpenStackModule):
if self.params[k] is not None:
kwargs[k] = self.params[k]
-stacks = [stack.to_dict(computed=False)
-          for stack in self.conn.orchestration.stacks(**kwargs)]
+stacks = []
+for stack in self.conn.orchestration.stacks(**kwargs):
+    stack_obj = self.conn.orchestration.get_stack(stack.id)
+    stacks.append(stack_obj.to_dict(computed=False))
self.exit_json(changed=False, stacks=stacks)
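The hunk above trades one list call for a `get_stack` per result, because the orchestration list endpoint returns summary objects while the show endpoint returns full detail. The shape of that pattern against a stubbed connection (the stub is illustrative, not the real SDK):

```python
class StubOrchestration:
    """Illustrative stand-in for conn.orchestration (not the real SDK)."""

    def stacks(self, **kwargs):
        # List call: summary objects carrying little more than an id.
        return [type("Stack", (), {"id": i})() for i in ("a", "b")]

    def get_stack(self, stack_id):
        # Show call: returns the fully populated object.
        return type(
            "Stack", (),
            {"to_dict": lambda self, computed=False: {"id": stack_id}},
        )()


def fetch_detailed_stacks(orchestration, **kwargs):
    # One extra GET per stack, in exchange for complete attributes.
    stacks = []
    for stack in orchestration.stacks(**kwargs):
        stacks.append(orchestration.get_stack(stack.id).to_dict(computed=False))
    return stacks
```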


@@ -382,6 +382,8 @@ class SubnetModule(OpenStackModule):
params['allocation_pools'] = self.params['allocation_pools']
params = self._add_extra_attrs(params)
params = {k: v for k, v in params.items() if v is not None}
+if self.params['disable_gateway_ip']:
+    params['gateway_ip'] = None
return params
def _build_updates(self, subnet, params):
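The two added lines depend on ordering: the None-filter earlier in the method runs first, and `gateway_ip` is forced back to None afterwards so the explicit null survives the filter and actually disables the gateway. In isolation (the helper name is hypothetical):

```python
def build_subnet_params(params, disable_gateway_ip):
    # Drop unset values first, mirroring the existing filter...
    params = {k: v for k, v in params.items() if v is not None}
    # ...then re-add the explicit null so it reaches the API.
    if disable_gateway_ip:
        params["gateway_ip"] = None
    return params
```

Reversing the two steps would silently strip the null and leave the gateway enabled.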


@@ -0,0 +1,309 @@
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Copyright (c) 2025 by Pure Storage, Inc.
# GNU General Public License v3.0+ (see COPYING or https://www.gnu.org/licenses/gpl-3.0.txt)
DOCUMENTATION = r"""
---
module: volume_manage
short_description: Manage/Unmanage Volumes
author: OpenStack Ansible SIG
description:
- Manage or Unmanage Volume in OpenStack.
options:
description:
description:
- String describing the volume
type: str
metadata:
description: Metadata for the volume
type: dict
name:
description:
- Name of the volume to be unmanaged or
the new name of a managed volume
- When I(state) is C(absent) this must be
the cinder volume ID
required: true
type: str
state:
description:
- Should the resource be present or absent.
choices: [present, absent]
default: present
type: str
bootable:
description:
- Bootable flag for volume.
type: bool
default: False
volume_type:
description:
- Volume type for volume
type: str
availability_zone:
description:
- The availability zone.
type: str
host:
description:
- Cinder host on which the existing volume resides
- Takes the form "host@backend-name#pool"
- Required when I(state) is C(present).
type: str
source_name:
description:
- Name of existing volume
type: str
source_id:
description:
- Identifier of existing volume
type: str
extends_documentation_fragment:
- openstack.cloud.openstack
"""
RETURN = r"""
volume:
description: Cinder's representation of the volume object
returned: always
type: dict
contains:
attachments:
description: Instance attachment information. For a newly managed volume,
this will always be empty.
type: list
availability_zone:
description: The name of the availability zone.
type: str
consistency_group_id:
description: The UUID of the consistency group.
type: str
created_at:
description: The date and time when the resource was created.
type: str
description:
description: The volume description.
type: str
extended_replication_status:
description: Extended replication status on this volume.
type: str
group_id:
description: The ID of the group.
type: str
host:
description: The volume's current back-end.
type: str
id:
description: The UUID of the volume.
type: str
image_id:
description: Image on which the volume was based
type: str
is_bootable:
description: Enables or disables the bootable attribute. You can boot an
instance from a bootable volume.
type: str
is_encrypted:
description: If true, this volume is encrypted.
type: bool
is_multiattach:
description: Whether this volume can be attached to more than one
server.
type: bool
metadata:
description: A metadata object. Contains one or more metadata key and
value pairs that are associated with the volume.
type: dict
migration_id:
description: The volume ID that this volume name on the backend is
based on.
type: str
migration_status:
description: The status of this volume migration (None means that a
migration is not currently in progress).
type: str
name:
description: The volume name.
type: str
project_id:
description: The project ID which the volume belongs to.
type: str
replication_driver_data:
description: Data set by the replication driver
type: str
replication_status:
description: The volume replication status.
type: str
scheduler_hints:
description: Scheduler hints for the volume
type: dict
size:
description: The size of the volume, in gibibytes (GiB).
type: int
snapshot_id:
description: To create a volume from an existing snapshot, specify the
UUID of the volume snapshot. The volume is created in same
availability zone and with same size as the snapshot.
type: str
source_volume_id:
description: The UUID of the source volume. The API creates a new volume
with the same size as the source volume unless a larger size
is requested.
type: str
status:
description: The volume status.
type: str
updated_at:
description: The date and time when the resource was updated.
type: str
user_id:
description: The UUID of the user.
type: str
volume_image_metadata:
description: List of image metadata entries. Only included for volumes
that were created from an image, or from a snapshot of a
volume originally created from an image.
type: dict
volume_type:
description: The associated volume type name for the volume.
type: str
"""
EXAMPLES = r"""
- name: Manage volume
openstack.cloud.volume_manage:
name: newly-managed-vol
source_name: manage-me
host: host@backend-name#pool
- name: Unmanage volume
openstack.cloud.volume_manage:
name: "5c831866-3bb3-4d67-a7d3-1b90880c9d18"
state: absent
"""
from ansible_collections.openstack.cloud.plugins.module_utils.openstack import (
OpenStackModule,
)
class VolumeManageModule(OpenStackModule):
argument_spec = dict(
description=dict(type="str"),
metadata=dict(type="dict"),
source_name=dict(type="str"),
source_id=dict(type="str"),
availability_zone=dict(type="str"),
host=dict(type="str"),
bootable=dict(default=False, type="bool"),
volume_type=dict(type="str"),
name=dict(required=True, type="str"),
state=dict(
default="present", choices=["absent", "present"], type="str"
),
)
module_kwargs = dict(
required_if=[("state", "present", ["host"])],
supports_check_mode=True,
)
def run(self):
name = self.params["name"]
state = self.params["state"]
changed = False
if state == "present":
changed = True
if not self.ansible.check_mode:
volumes = self._manage_list()
manageable = volumes["manageable-volumes"]
safe_to_manage = self._is_safe_to_manage(
manageable, self.params["source_name"]
)
if not safe_to_manage:
self.exit_json(changed=False)
volume = self._manage()
if volume:
self.exit_json(
changed=changed, volume=volume.to_dict(computed=False)
)
else:
self.exit_json(changed=False)
else:
self.exit_json(changed=changed)
else:
volume = self.conn.block_storage.find_volume(name)
if volume:
changed = True
if not self.ansible.check_mode:
self._unmanage()
self.exit_json(changed=changed)
else:
self.exit_json(changed=changed)
def _is_safe_to_manage(self, manageable_list, target_name):
entry = next(
(
v
for v in manageable_list
if isinstance(v.get("reference"), dict)
and (
v["reference"].get("name") == target_name
or v["reference"].get("source-name") == target_name
)
),
None,
)
if entry is None:
return False
return entry.get("safe_to_manage", False)
def _manage(self):
kwargs = {
key: self.params[key]
for key in [
"description",
"bootable",
"volume_type",
"availability_zone",
"host",
"metadata",
"name",
]
if self.params.get(key) is not None
}
kwargs["ref"] = {}
if self.params["source_name"]:
kwargs["ref"]["source-name"] = self.params["source_name"]
if self.params["source_id"]:
kwargs["ref"]["source-id"] = self.params["source_id"]
volume = self.conn.block_storage.manage_volume(**kwargs)
return volume
def _manage_list(self):
response = self.conn.block_storage.get(
"/manageable_volumes?host=" + self.params["host"],
microversion="3.8",
)
response.raise_for_status()
manageable_volumes = response.json()
return manageable_volumes
def _unmanage(self):
self.conn.block_storage.unmanage_volume(self.params["name"])
def main():
module = VolumeManageModule()
module()
if __name__ == "__main__":
main()
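The lookup in `_is_safe_to_manage` above walks the `manageable-volumes` list returned by Cinder. The entries it expects look roughly like this (the volume names are made up; only the fields the check reads are shown):

```python
def is_safe_to_manage(manageable_list, target_name):
    # Standalone version of VolumeManageModule._is_safe_to_manage:
    # match the target against reference.name or reference.source-name,
    # then honour the backend's safe_to_manage flag.
    entry = next(
        (
            v
            for v in manageable_list
            if isinstance(v.get("reference"), dict)
            and (
                v["reference"].get("name") == target_name
                or v["reference"].get("source-name") == target_name
            )
        ),
        None,
    )
    return bool(entry and entry.get("safe_to_manage", False))


# Illustrative manageable-volumes entries (names are made up):
sample = [
    {"reference": {"source-name": "manage-me"}, "safe_to_manage": True},
    {"reference": {"source-name": "already-managed"}, "safe_to_manage": False},
]
```

A missing entry and an unsafe entry both come back False, which is what makes the module exit early with `changed=False`.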


@@ -24,7 +24,7 @@ echo "Running test with Python version ${PY_VER}"
rm -rf "${ANSIBLE_COLLECTIONS_PATH}"
mkdir -p ${ANSIBLE_COLLECTIONS_PATH}/ansible_collections/openstack/cloud
-cp -a ${TOXDIR}/{plugins,meta,tests,docs} ${ANSIBLE_COLLECTIONS_PATH}/ansible_collections/openstack/cloud
+cp -a ${TOXDIR}/{plugins,meta,tests,docs,galaxy.yml} ${ANSIBLE_COLLECTIONS_PATH}/ansible_collections/openstack/cloud
cd ${ANSIBLE_COLLECTIONS_PATH}/ansible_collections/openstack/cloud/
echo "Running ansible-test with version:"
ansible --version